id | title | abstract | full_text | qas | figures_and_tables
---|---|---|---|---|---
1909.00694 | Minimally Supervised Learning of Affective Events Using Discourse Relations | Recognizing affective events that trigger positive or negative sentiment has a wide range of natural language processing applications but remains a challenging problem mainly because the polarity of an event is not necessarily predictable from its constituent words. In this paper, we propose to propagate affective polarity using discourse relations. Our method is simple and only requires a very small seed lexicon and a large raw corpus. Our experiments using Japanese data show that our method learns affective events effectively without manually labeled data. It also improves supervised learning results when labeled data are small. | {
"section_name": [
"Introduction",
"Related Work",
"Proposed Method",
"Proposed Method ::: Polarity Function",
"Proposed Method ::: Discourse Relation-Based Event Pairs",
"Proposed Method ::: Discourse Relation-Based Event Pairs ::: AL (Automatically Labeled Pairs)",
"Proposed Method ::: Discourse Relation-Based Event Pairs ::: CA (Cause Pairs)",
"Proposed Method ::: Discourse Relation-Based Event Pairs ::: CO (Concession Pairs)",
"Proposed Method ::: Loss Functions",
"Experiments",
"Experiments ::: Dataset",
"Experiments ::: Dataset ::: AL, CA, and CO",
"Experiments ::: Dataset ::: ACP (ACP Corpus)",
"Experiments ::: Model Configurations",
"Experiments ::: Results and Discussion",
"Conclusion",
"Acknowledgments",
"Appendices ::: Seed Lexicon ::: Positive Words",
"Appendices ::: Seed Lexicon ::: Negative Words",
"Appendices ::: Settings of Encoder ::: BiGRU",
"Appendices ::: Settings of Encoder ::: BERT"
],
"paragraphs": [
[
"Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).",
"Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data.",
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.",
"We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small."
],
[
"Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).",
"Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., “$A$ and $B$” and “$A$ but $B$”). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions.",
"BIBREF0 constructed a network of events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive.",
"Some previous studies made use of document structure to understand the sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability.",
""
],
[
""
],
[
"",
"Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network with the following form:",
"${\\rm Encoder}$ outputs a vector representation of the event $x$. ${\\rm Linear}$ is a fully-connected layer and transforms the representation into a scalar. ${\\rm tanh}$ is the hyperbolic tangent and transforms the scalar into a score ranging from $-1$ to 1. In Section SECREF21, we consider two specific implementations of ${\\rm Encoder}$.",
""
],
[
"Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \\cdots $) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.",
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types.",
""
],
[
"The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. They are used as reference scores during training.",
""
],
[
"The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.",
""
],
[
"The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.",
""
],
[
"Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.",
"We use mean squared error to construct loss functions. For the AL data, the loss function is defined as:",
"where $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data. $r_{i1}$ and $r_{i2}$ are the automatically-assigned scores of $x_{i1}$ and $x_{i2}$, respectively. $N_{\\rm AL}$ is the total number of AL pairs, and $\\lambda _{\\rm AL}$ is a hyperparameter.",
"For the CA data, the loss function is defined as:",
"$y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA pairs. $N_{\\rm CA}$ is the total number of CA pairs. $\\lambda _{\\rm CA}$ and $\\mu $ are hyperparameters. The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero.",
"The loss function for the CO data is defined analogously:",
"The difference is that the first term makes the scores of the two events distant from each other.",
""
],
[
""
],
[
""
],
[
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.",
". 重大な失敗を犯したので、仕事をクビになった。",
"Because [I] made a serious mistake, [I] got fired.",
"From this sentence, we extracted the event pair of “重大な失敗を犯す” ([I] make a serious mistake) and “仕事をクビになる” ([I] get fired), and tagged it with Cause.",
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16."
],
[
"We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:",
". 作業が楽だ。",
"The work is easy.",
". 駐車場がない。",
"There is no parking lot.",
"Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.",
"The objective function for supervised training is:",
"",
"where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\\rm ACP}$ is the number of the events of the ACP Corpus.",
"To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \\le 0$.",
""
],
[
"As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.",
"BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder. Its output is the final hidden state corresponding to the special classification tag ([CLS]). For the details of ${\\rm Encoder}$, see Sections SECREF30.",
"We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$.",
""
],
[
"",
"Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.",
"The models in the top block performed considerably better than the random baselines. The performance gaps with their (semi-)supervised counterparts, shown in the middle block, were less than 7%. This demonstrates the effectiveness of discourse relation-based label propagation.",
"Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive but its performance went down if CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to noises found more frequently in CA and CO.",
"Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results shown in Table TABREF24 demonstrate, our method is effective when labeled data are small.",
"The result of hyperparameter optimization for the BiGRU encoder was as follows:",
"As the CA and CO pairs were equal in size (Table TABREF16), $\\lambda _{\\rm CA}$ and $\\lambda _{\\rm CO}$ were comparable values. $\\lambda _{\\rm CA}$ was about one-third of $\\lambda _{\\rm CO}$, and this indicated that the CA pairs were noisier than the CO pairs. A major type of CA pairs that violates our assumption was in the form of “$\\textit {problem}_{\\text{negative}}$ causes $\\textit {solution}_{\\text{positive}}$”:",
". (悪いところがある, よくなるように努力する)",
"(there is a bad point, [I] try to improve [it])",
"The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\\lambda _{\\rm CA}$.",
"Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice but the model correctly recognized that they had opposite polarities. The last two examples share the predicate “落とす\" (drop) and only the objects are different. The second event “肩を落とす\" (lit. drop one's shoulders) is an idiom that expresses a disappointed feeling. The examples demonstrate that our model correctly learned non-compositional expressions.",
""
],
[
"In this paper, we proposed to use discourse relations to effectively propagate polarities of affective events from seeds. Experiments show that, even with a minimal amount of supervision, the proposed method performed well.",
"Although event pairs linked by discourse analysis are shown to be useful, they nevertheless contain noises. Adding linguistically-motivated filtering rules would help improve the performance."
],
[
"We thank Nobuhiro Kaji for providing the ACP Corpus and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs. This work was partially supported by Yahoo! Japan Corporation."
],
[
"喜ぶ (rejoice), 嬉しい (be glad), 楽しい (be pleasant), 幸せ (be happy), 感動 (be impressed), 興奮 (be excited), 懐かしい (feel nostalgic), 好き (like), 尊敬 (respect), 安心 (be relieved), 感心 (admire), 落ち着く (be calm), 満足 (be satisfied), 癒される (be healed), and スッキリ (be refreshed)."
],
[
"怒る (get angry), 悲しい (be sad), 寂しい (be lonely), 怖い (be scared), 不安 (feel anxious), 恥ずかしい (be embarrassed), 嫌 (hate), 落ち込む (feel down), 退屈 (be bored), 絶望 (feel hopeless), 辛い (have a hard time), 困る (have trouble), 憂鬱 (be depressed), 心配 (be worried), and 情けない (be sorry)."
],
[
"The dimension of the embedding layer was 256. The embedding layer was initialized with the word embeddings pretrained using the Web corpus. The input sentences were segmented into words by the morphological analyzer Juman++. The vocabulary size was 100,000. The number of hidden layers was 2. The dimension of hidden units was 256. The optimizer was Momentum SGD BIBREF21. The mini-batch size was 1024. We ran 100 epochs and selected the snapshot that achieved the highest score for the dev set."
],
[
"We used a Japanese BERT model pretrained with Japanese Wikipedia. The input sentences were segmented into words by Juman++, and words were broken into subwords by applying BPE BIBREF20. The vocabulary size was 32,000. The maximum length of an input sequence was 128. The number of hidden layers was 12. The dimension of hidden units was 768. The number of self-attention heads was 12. The optimizer was Adam BIBREF19. The mini-batch size was 32. We ran 1 epoch."
]
]
} | {
"question": [
"What is the seed lexicon?",
"What are the results?",
"How are relations used to propagate polarity?",
"How big is the Japanese data?",
"What are labels available in dataset for supervision?",
"How big are improvements of supervszed learning results trained on smalled labeled data enhanced with proposed approach copared to basic approach?",
"How does their model learn using mostly raw data?",
"How big is seed lexicon used for training?",
"How large is raw corpus used for training?"
],
"question_id": [
"753990d0b621d390ed58f20c4d9e4f065f0dc672",
"9d578ddccc27dd849244d632dd0f6bf27348ad81",
"02e4bf719b1a504e385c35c6186742e720bcb281",
"44c4bd6decc86f1091b5fc0728873d9324cdde4e",
"86abeff85f3db79cf87a8c993e5e5aa61226dc98",
"c029deb7f99756d2669abad0a349d917428e9c12",
"39f8db10d949c6b477fa4b51e7c184016505884f",
"d0bc782961567dc1dd7e074b621a6d6be44bb5b4",
"a592498ba2fac994cd6fad7372836f0adb37e22a"
],
"nlp_background": [
"two",
"two",
"two",
"two",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "a vocabulary of positive and negative predicates that helps determine the polarity score of an event",
"evidence": [
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types."
],
"highlighted_evidence": [
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event.",
"It is a "
]
},
{
"unanswerable": false,
"extractive_spans": [
"seed lexicon consists of positive and negative predicates"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types."
],
"highlighted_evidence": [
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event."
]
}
],
"annotation_id": [
"31e85022a847f37c15fd0415f3c450c74c8e4755",
"95da0a6e1b08db74a405c6a71067c9b272a50ff5"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Using all data to train: AL -- BiGRU achieved 0.843 accuracy, AL -- BERT achieved 0.863 accuracy, AL+CA+CO -- BiGRU achieved 0.866 accuracy, AL+CA+CO -- BERT achieved 0.835, accuracy, ACP -- BiGRU achieved 0.919 accuracy, ACP -- BERT achived 0.933, accuracy, ACP+AL+CA+CO -- BiGRU achieved 0.917 accuracy, ACP+AL+CA+CO -- BERT achieved 0.913 accuracy. \nUsing a subset to train: BERT achieved 0.876 accuracy using ACP (6K), BERT achieved 0.886 accuracy using ACP (6K) + AL, BiGRU achieved 0.830 accuracy using ACP (6K), BiGRU achieved 0.879 accuracy using ACP (6K) + AL + CA + CO.",
"evidence": [
"FLOAT SELECTED: Table 3: Performance of various models on the ACP test set.",
"FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data.",
"As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.",
"We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Performance of various models on the ACP test set.",
"FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data.",
"As for ${\\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. ",
"We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\\mathcal {L}_{\\rm AL}$, $\\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$, $\\mathcal {L}_{\\rm ACP}$, and $\\mathcal {L}_{\\rm ACP} + \\mathcal {L}_{\\rm AL} + \\mathcal {L}_{\\rm CA} + \\mathcal {L}_{\\rm CO}$."
]
}
],
"annotation_id": [
"1e5e867244ea656c4b7632628086209cf9bae5fa"
],
"worker_id": [
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "based on the relation between events, the suggested polarity of one event can determine the possible polarity of the other event ",
"evidence": [
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event."
],
"highlighted_evidence": [
"As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "cause relation: both events in the relation should have the same polarity; concession relation: events should have opposite polarity",
"evidence": [
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.",
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types."
],
"highlighted_evidence": [
"As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.",
"The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation."
]
}
],
"annotation_id": [
"49a78a07d2eed545556a835ccf2eb40e5eee9801",
"acd6d15bd67f4b1496ee8af1c93c33e7d59c89e1"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "7000000 pairs of events were extracted from the Japanese Web corpus, 529850 pairs of events were extracted from the ACP corpus",
"evidence": [
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.",
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16.",
"FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.",
"We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:",
"Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.",
"FLOAT SELECTED: Table 2: Details of the ACP dataset."
],
"highlighted_evidence": [
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. ",
"From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16.",
"FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.",
"We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well.",
"Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.",
"FLOAT SELECTED: Table 2: Details of the ACP dataset."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "The ACP corpus has around 700k events split into positive and negative polarity ",
"evidence": [
"FLOAT SELECTED: Table 2: Details of the ACP dataset."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Details of the ACP dataset."
]
}
],
"annotation_id": [
"36926a4c9e14352c91111150aa4c6edcc5c0770f",
"75b6dd28ccab20a70087635d89c2b22d0e99095c"
],
"worker_id": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"negative",
"positive"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive)."
],
"highlighted_evidence": [
"In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive)."
]
}
],
"annotation_id": [
"2d8c7df145c37aad905e48f64d8caa69e54434d4"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "3%",
"evidence": [
"FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data."
]
}
],
"annotation_id": [
"df4372b2e8d9bb2039a5582f192768953b01d904"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "by exploiting discourse relations to propagate polarity from seed predicates to final sentiment polarity",
"evidence": [
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event."
],
"highlighted_evidence": [
"In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive)."
]
}
],
"annotation_id": [
"5c5bbc8af91c16af89b4ddd57ee6834be018e4e7"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "30 words",
"evidence": [
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16."
],
"highlighted_evidence": [
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. "
]
}
],
"annotation_id": [
"0206f2131f64a3e02498cedad1250971b78ffd0c"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"100 million sentences"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.",
"We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16."
],
"highlighted_evidence": [
"As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. ",
"From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO."
]
}
],
"annotation_id": [
"c36bad2758c4f9866d64c357c475d370595d937f"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
]
} | {
"caption": [
"Figure 1: An overview of our method. We focus on pairs of events, the former events and the latter events, which are connected with a discourse relation, CAUSE or CONCESSION. Dropped pronouns are indicated by brackets in English translations. We divide the event pairs into three types: AL, CA, and CO. In AL, the polarity of a latter event is automatically identified as either positive or negative, according to the seed lexicon (the positive word is colored red and the negative word blue). We propagate the latter event’s polarity to the former event. The same polarity as the latter event is used for the discourse relation CAUSE, and the reversed polarity for CONCESSION. In CA and CO, the latter event’s polarity is not known. Depending on the discourse relation, we encourage the two events’ polarities to be the same (CA) or reversed (CO). Details are given in Section 3.2.",
"Table 1: Statistics of the AL, CA, and CO datasets.",
"Table 2: Details of the ACP dataset.",
"Table 5: Examples of polarity scores predicted by the BiGRU model trained with AL+CA+CO.",
"Table 3: Performance of various models on the ACP test set.",
"Table 4: Results for small labeled training data. Given the performance with the full dataset, we show BERT trained only with the AL data."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table5-1.png",
"5-Table3-1.png",
"5-Table4-1.png"
]
} |
2003.07723 | PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry | Most approaches to emotion analysis regarding social media, literature, news, and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik. However, art (such as literature) enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses. We consider emotions as they are elicited in the reader, rather than what is expressed in the text or intended by the author. Thus, we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within context. We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing. Our annotation with experts leads to an acceptable agreement of kappa=.70, resulting in a consistent dataset for future large scale analysis. Finally, we conduct first emotion classification experiments based on BERT, showing that identifying aesthetic emotions is challenging in our data, with up to .52 F1-micro on the German subset. Data and resources are available at https://github.com/tnhaider/poetry-emotion | {
"section_name": [
"",
" ::: ",
" ::: ::: ",
"Introduction",
"Related Work ::: Poetry in Natural Language Processing",
"Related Work ::: Emotion Annotation",
"Related Work ::: Emotion Classification",
"Data Collection",
"Data Collection ::: German",
"Data Collection ::: English",
"Expert Annotation",
"Expert Annotation ::: Workflow",
"Expert Annotation ::: Emotion Labels",
"Expert Annotation ::: Agreement",
"Crowdsourcing Annotation",
"Crowdsourcing Annotation ::: Data and Setup",
"Crowdsourcing Annotation ::: Results",
"Crowdsourcing Annotation ::: Comparing Experts with Crowds",
"Modeling",
"Concluding Remarks",
"Acknowledgements",
"Appendix",
"Appendix ::: Friedrich Hölderlin: Hälfte des Lebens (1804)",
"Appendix ::: Georg Trakl: In den Nachmittag geflüstert (1912)",
"Appendix ::: Walt Whitman: O Captain! My Captain! (1865)"
],
"paragraphs": [
[
"1.1em"
],
[
"1.1.1em"
],
[
"1.1.1.1em",
"Thomas Haider$^{1,3}$, Steffen Eger$^2$, Evgeny Kim$^3$, Roman Klinger$^3$, Winfried Menninghaus$^1$",
"$^{1}$Department of Language and Literature, Max Planck Institute for Empirical Aesthetics",
"$^{2}$NLLG, Department of Computer Science, Technische Universitat Darmstadt",
"$^{3}$Institut für Maschinelle Sprachverarbeitung, University of Stuttgart",
"{thomas.haider, w.m}@ae.mpg.de, [email protected]",
"{roman.klinger, evgeny.kim}@ims.uni-stuttgart.de",
"Most approaches to emotion analysis regarding social media, literature, news, and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik. However, art (such as literature) enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses. We consider emotions as they are elicited in the reader, rather than what is expressed in the text or intended by the author. Thus, we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within context. We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing. Our annotation with experts leads to an acceptable agreement of $\\kappa =.70$, resulting in a consistent dataset for future large scale analysis. Finally, we conduct first emotion classification experiments based on BERT, showing that identifying aesthetic emotions is challenging in our data, with up to .52 F1-micro on the German subset. Data and resources are available at https://github.com/tnhaider/poetry-emotion.",
"Emotion, Aesthetic Emotions, Literature, Poetry, Annotation, Corpora, Emotion Recognition, Multi-Label"
],
[
"Emotions are central to human experience, creativity and behavior. Models of affect and emotion, both in psychology and natural language processing, commonly operate on predefined categories, designated either by continuous scales of, e.g., Valence, Arousal and Dominance BIBREF0 or discrete emotion labels (which can also vary in intensity). Discrete sets of emotions often have been motivated by theories of basic emotions, as proposed by Ekman1992—Anger, Fear, Joy, Disgust, Surprise, Sadness—and Plutchik1991, who added Trust and Anticipation. These categories are likely to have evolved as they motivate behavior that is directly relevant for survival. However, art reception typically presupposes a situation of safety and therefore offers special opportunities to engage in a broader range of more complex and subtle emotions. These differences between real-life and art contexts have not been considered in natural language processing work so far.",
"To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1, BIBREF2, BIBREF3. Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2. Contrary to the negativity bias of classical emotion catalogues, emotion terms used for aesthetic evaluation purposes include far more positive than negative emotions. At the same time, many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2, e.g., feelings of suspense include both hopeful and fearful anticipations.",
"For these reasons, we argue that the analysis of literature (with a focus on poetry) should rely on specifically selected emotion items rather than on the narrow range of basic emotions only. Our selection is based on previous research on this issue in psychological studies on art reception and, specifically, on poetry. For instance, knoop2016mapping found that Beauty is a major factor in poetry reception.",
"We primarily adopt and adapt emotion terms that schindler2017measuring have identified as aesthetic emotions in their study on how to measure and categorize such particular affective states. Further, we consider the aspect that, when selecting specific emotion labels, the perspective of annotators plays a major role. Whether emotions are elicited in the reader, expressed in the text, or intended by the author largely changes the permissible labels. For example, feelings of Disgust or Love might be intended or expressed in the text, but the text might still fail to elicit corresponding feelings as these concepts presume a strong reaction in the reader. Our focus here was on the actual emotional experience of the readers rather than on hypothetical intentions of authors. We opted for this reader perspective based on previous research in NLP BIBREF5, BIBREF6 and work in empirical aesthetics BIBREF7, that specifically measured the reception of poetry. Our final set of emotion labels consists of Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia.",
"In addition to selecting an adapted set of emotions, the annotation of poetry brings further challenges, one of which is the choice of the appropriate unit of annotation. Previous work considers words BIBREF8, BIBREF9, sentences BIBREF10, BIBREF11, utterances BIBREF12, sentence triples BIBREF13, or paragraphs BIBREF14 as the units of annotation. For poetry, reasonable units follow the logical document structure of poems, i.e., verse (line), stanza, and, owing to its relative shortness, the complete text. The more coarse-grained the unit, the more difficult the annotation is likely to be, but the more it may also enable the annotation of emotions in context. We find that annotating fine-grained units (lines) that are hierarchically ordered within a larger context (stanza, poem) caters to the specific structure of poems, where emotions are regularly mixed and are more interpretable within the whole poem. Consequently, we allow the mixing of emotions already at line level through multi-label annotation.",
"The remainder of this paper includes (1) a report of the annotation process that takes these challenges into consideration, (2) a description of our annotated corpora, and (3) an implementation of baseline models for the novel task of aesthetic emotion annotation in poetry. In a first study, the annotators work on the annotations in a closely supervised fashion, carefully reading each verse, stanza, and poem. In a second study, the annotations are performed via crowdsourcing within relatively short time periods with annotators not seeing the entire poem while reading the stanza. Using these two settings, we aim at obtaining a better understanding of the advantages and disadvantages of an expert vs. crowdsourcing setting in this novel annotation task. Particularly, we are interested in estimating the potential of a crowdsourcing environment for the task of self-perceived emotion annotation in poetry, given time and cost overhead associated with in-house annotation process (that usually involve training and close supervision of the annotators).",
"We provide the final datasets of German and English language poems annotated with reader emotions on verse level at https://github.com/tnhaider/poetry-emotion."
],
[
"Natural language understanding research on poetry has investigated stylistic variation BIBREF15, BIBREF16, BIBREF17, with a focus on broadly accepted formal features such as meter BIBREF18, BIBREF19, BIBREF20 and rhyme BIBREF21, BIBREF22, as well as enjambement BIBREF23, BIBREF24 and metaphor BIBREF25, BIBREF26. Recent work has also explored the relationship of poetry and prose, mainly on a syntactic level BIBREF27, BIBREF28. Furthermore, poetry also lends itself well to semantic (change) analysis BIBREF29, BIBREF30, as linguistic invention BIBREF31, BIBREF32 and succinctness BIBREF33 are at the core of poetic production.",
"Corpus-based analysis of emotions in poetry has been considered, but there is no work on German, and little on English. kao2015computational analyze English poems with word associations from the Harvard Inquirer and LIWC, within the categories positive/negative outlook, positive/negative emotion and phys./psych. well-being. hou-frank-2015-analyzing examine the binary sentiment polarity of Chinese poems with a weighted personalized PageRank algorithm. barros2013automatic followed a tagging approach with a thesaurus to annotate words that are similar to the words `Joy', `Anger', `Fear' and `Sadness' (moreover translating these from English to Spanish). With these word lists, they distinguish the categories `Love', `Songs to Lisi', `Satire' and `Philosophical-Moral-Religious' in Quevedo's poetry. Similarly, alsharif2013emotion classify unique Arabic `emotional text forms' based on word unigrams.",
"Mohanty2018 create a corpus of 788 poems in the Indian Odia language, annotate it on text (poem) level with binary negative and positive sentiment, and are able to distinguish these with moderate success. Sreeja2019 construct a corpus of 736 Indian language poems and annotate the texts on Ekman's six categories + Love + Courage. They achieve a Fleiss Kappa of .48.",
"In contrast to our work, these studies focus on basic emotions and binary sentiment polarity only, rather than addressing aesthetic emotions. Moreover, they annotate on the level of complete poems (instead of fine-grained verse and stanza-level)."
],
[
"Emotion corpora have been created for different tasks and with different annotation strategies, with different units of analysis and different foci of emotion perspective (reader, writer, text). Examples include the ISEAR dataset BIBREF34 (document-level); emotion annotation in children stories BIBREF10 and news headlines BIBREF35 (sentence-level); and fine-grained emotion annotation in literature by Kim2018 (phrase- and word-level). We refer the interested reader to an overview paper on existing corpora BIBREF36.",
"We are only aware of a limited number of publications which look in more depth into the emotion perspective. buechel-hahn-2017-emobank report on an annotation study that focuses both on writer's and reader's emotions associated with English sentences. The results show that the reader perspective yields better inter-annotator agreement. Yang2009 also study the difference between writer and reader emotions, but not with a modeling perspective. The authors find that positive reader emotions tend to be linked to positive writer emotions in online blogs."
],
[
"The task of emotion classification has been tackled before using rule-based and machine learning approaches. Rule-based emotion classification typically relies on lexical resources of emotionally charged words BIBREF9, BIBREF37, BIBREF8 and offers a straightforward and transparent way to detect emotions in text.",
"In contrast to rule-based approaches, current models for emotion classification are often based on neural networks and commonly use word embeddings as features. Schuff2017 applied models from the classes of CNN, BiLSTM, and LSTM and compare them to linear classifiers (SVM and MaxEnt), where the BiLSTM shows best results with the most balanced precision and recall. AbdulMageed2017 claim the highest F$_1$ with gated recurrent unit networks BIBREF38 for Plutchik's emotion model. More recently, shared tasks on emotion analysis BIBREF39, BIBREF40 triggered a set of more advanced deep learning approaches, including BERT BIBREF41 and other transfer learning methods BIBREF42."
],
[
"For our annotation and modeling studies, we build on top of two poetry corpora (in English and German), which we refer to as PO-EMO. This collection represents important contributions to the literary canon over the last 400 years. We make this resource available in TEI P5 XML and an easy-to-use tab separated format. Table TABREF9 shows a size overview of these data sets. Figure FIGREF8 shows the distribution of our data over time via density plots. Note that both corpora show a relative underrepresentation before the onset of the romantic period (around 1750)."
],
[
"The German corpus contains poems available from the website lyrik.antikoerperchen.de (ANTI-K), which provides a platform for students to upload essays about poems. The data is available in the Hypertext Markup Language, with clean line and stanza segmentation. ANTI-K also has extensive metadata, including author names, years of publication, numbers of sentences, poetic genres, and literary periods, that enable us to gauge the distribution of poems according to periods. The 158 poems we consider (731 stanzas) are dispersed over 51 authors and the New High German timeline (1575–1936 A.D.). This data has been annotated, besides emotions, for meter, rhythm, and rhyme in other studies BIBREF22, BIBREF43."
],
[
"The English corpus contains 64 poems of popular English writers. It was partly collected from Project Gutenberg with the GutenTag tool, and, in addition, includes a number of hand selected poems from the modern period and represents a cross section of popular English poets. We took care to include a number of female authors, who would have been underrepresented in a uniform sample. Time stamps in the corpus are organized by the birth year of the author, as assigned in Project Gutenberg."
],
[
"In the following, we will explain how we compiled and annotated three data subsets, namely, (1) 48 German poems with gold annotation. These were originally annotated by three annotators. The labels were then aggregated with majority voting and based on discussions among the annotators. Finally, they were curated to only include one gold annotation. (2) The remaining 110 German poems that are used to compute the agreement in table TABREF20 and (3) 64 English poems contain the raw annotation from two annotators.",
"We report the genesis of our annotation guidelines including the emotion classes. With the intention to provide a language resource for the computational analysis of emotion in poetry, we aimed at maximizing the consistency of our annotation, while doing justice to the diversity of poetry. We iteratively improved the guidelines and the annotation workflow by annotating in batches, cleaning the class set, and the compilation of a gold standard. The final overall cost of producing this expert annotated dataset amounts to approximately 3,500."
],
[
"The annotation process was initially conducted by three female university students majoring in linguistics and/or literary studies, which we refer to as our “expert annotators”. We used the INCePTION platform for annotation BIBREF44. Starting with the German poems, we annotated in batches of about 16 (and later in some cases 32) poems. After each batch, we computed agreement statistics including heatmaps, and provided this feedback to the annotators. For the first three batches, the three annotators produced a gold standard using a majority vote for each line. Where this was inconclusive, they developed an adjudicated annotation based on discussion. Where necessary, we encouraged the annotators to aim for more consistency, as most of the frequent switching of emotions within a stanza could not be reconstructed or justified.",
"In poems, emotions are regularly mixed (already on line level) and are more interpretable within the whole poem. We therefore annotate lines hierarchically within the larger context of stanzas and the whole poem. Hence, we instruct the annotators to read a complete stanza or full poem, and then annotate each line in the context of its stanza. To reflect on the emotional complexity of poetry, we allow a maximum of two labels per line while avoiding heavy label fluctuations by encouraging annotators to reflect on their feelings to avoid `empty' annotations. Rather, they were advised to use fewer labels and more consistent annotation. This additional constraint is necessary to avoid “wild”, non-reconstructable or non-justified annotations.",
"All subsequent batches (all except the first three) were only annotated by two out of the three initial annotators, coincidentally those two who had the lowest initial agreement with each other. We asked these two experts to use the generated gold standard (48 poems; majority votes of 3 annotators plus manual curation) as a reference (“if in doubt, annotate according to the gold standard”). This eliminated some systematic differences between them and markedly improved the agreement levels, roughly from 0.3–0.5 Cohen's $\\kappa $ in the first three batches to around 0.6–0.8 $\\kappa $ for all subsequent batches. This annotation procedure relaxes the reader perspective, as we encourage annotators (if in doubt) to annotate how they think the other annotators would annotate. However, we found that this formulation improves the usability of the data and leads to a more consistent annotation."
],
[
"We opt for measuring the reader perspective rather than the text surface or author's intent. To closer define and support conceptualizing our labels, we use particular `items', as they are used in psychological self-evaluations. These items consist of adjectives, verbs or short phrases. We build on top of schindler2017measuring who proposed 43 items that were then grouped by a factor analysis based on self-evaluations of participants. The resulting factors are shown in Table TABREF17. We attempt to cover all identified factors and supplement with basic emotions BIBREF46, BIBREF47, where possible.",
"We started with a larger set of labels to then delete and substitute (tone down) labels during the initial annotation process to avoid infrequent classes and inconsistencies. Further, we conflate labels if they show considerable confusion with each other. These iterative improvements particularly affected Confusion, Boredom and Other that were very infrequently annotated and had little agreement among annotators ($\\kappa <.2$). For German, we also removed Nostalgia ($\\kappa =.218$) after gold standard creation, but after consideration, added it back for English, then achieving agreement. Nostalgia is still available in the gold standard (then with a second label Beauty/Joy or Sadness to keep consistency). However, Confusion, Boredom and Other are not available in any sub-corpus.",
"Our final set consists of nine classes, i.e., (in order of frequency) Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia. In the following, we describe the labels and give further details on the aggregation process.",
"Annoyance (annoys me/angers me/felt frustrated): Annoyance implies feeling annoyed, frustrated or even angry while reading the line/stanza. We include the class Anger here, as this was found to be too strong in intensity.",
"Awe/Sublime (found it overwhelming/sense of greatness): Awe/Sublime implies being overwhelmed by the line/stanza, i.e., if one gets the impression of facing something sublime or if the line/stanza inspires one with awe (or that the expression itself is sublime). Such emotions are often associated with subjects like god, death, life, truth, etc. The term Sublime originated with kant2000critique as one of the first aesthetic emotion terms. Awe is a more common English term.",
"Beauty/Joy (found it beautiful/pleasing/makes me happy/joyful): kant2000critique already spoke of a “feeling of beauty”, and it should be noted that it is not a `merely pleasing emotion'. Therefore, in our pilot annotations, Beauty and Joy were separate labels. However, schindler2017measuring found that items for Beauty and Joy load into the same factors. Furthermore, our pilot annotations revealed, while Beauty is the more dominant and frequent feeling, both labels regularly accompany each other, and they often get confused across annotators. Therefore, we add Joy to form an inclusive label Beauty/Joy that increases annotation consistency.",
"Humor (found it funny/amusing): Implies feeling amused by the line/stanza or if it makes one laugh.",
"Nostalgia (makes me nostalgic): Nostalgia is defined as a sentimental longing for things, persons or situations in the past. It often carries both positive and negative feelings. However, since this label is quite infrequent, and not available in all subsets of the data, we annotated it with an additional Beauty/Joy or Sadness label to ensure annotation consistency.",
"Sadness (makes me sad/touches me): If the line/stanza makes one feel sad. It also includes a more general `being touched / moved'.",
"Suspense (found it gripping/sparked my interest): Choose Suspense if the line/stanza keeps one in suspense (if the line/stanza excites one or triggers one's curiosity). We further removed Anticipation from Suspense/Anticipation, as Anticipation appeared to us as being a more cognitive prediction whereas Suspense is a far more straightforward emotion item.",
"Uneasiness (found it ugly/unsettling/disturbing / frightening/distasteful): This label covers situations when one feels discomfort about the line/stanza (if the line/stanza feels distasteful/ugly, unsettling/disturbing or frightens one). The labels Ugliness and Disgust were conflated into Uneasiness, as both are seldom felt in poetry (being inadequate/too strong/high in arousal), and typically lead to Uneasiness.",
"Vitality (found it invigorating/spurs me on/inspires me): This label is meant for a line/stanza that has an inciting, encouraging effect (if the line/stanza conveys a feeling of movement, energy and vitality which animates to action). Similar terms are Activation and Stimulation."
],
[
"Table TABREF20 shows the Cohen's $\\kappa $ agreement scores among our two expert annotators for each emotion category $e$ as follows. We assign each instance (a line in a poem) a binary label indicating whether or not the annotator has annotated the emotion category $e$ in question. From this, we obtain vectors $v_i^e$, for annotators $i=0,1$, where each entry of $v_i^e$ holds the binary value for the corresponding line. We then apply the $\\kappa $ statistics to the two binary vectors $v_i^e$. Additionally to averaged $\\kappa $, we report micro-F1 values in Table TABREF21 between the multi-label annotations of both expert annotators as well as the micro-F1 score of a random baseline as well as of the majority emotion baseline (which labels each line as Beauty/Joy).",
"We find that Cohen $\\kappa $ agreement ranges from .84 for Uneasiness in the English data, .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61) and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, where the ranking is almost identical, especially for German. However, for English, Annotator 2 annotates more Vitality than Uneasiness. Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps. Notably, Beauty/Joy and Sadness are confused across annotators more often than other labels. This is topical for poetry, and therefore not surprising: One might argue that the beauty of beings and situations is only beautiful because it is not enduring and therefore not to divorce from the sadness of the vanishing of beauty BIBREF48. We also find considerable confusion of Sadness with Awe/Sublime and Vitality, while the latter is also regularly confused with Beauty/Joy.",
"Furthermore, as shown in Figure FIGREF23, we find that no single poem aggregates to more than six emotion labels, while no stanza aggregates to more than four emotion labels. However, most lines and stanzas prefer one or two labels. German poems seem more emotionally diverse where more poems have three labels than two labels, while the majority of English poems have only two labels. This is however attributable to the generally shorter English texts."
],
[
"After concluding the expert annotation, we performed a focused crowdsourcing experiment, based on the final label set and items as they are listed in Table TABREF27 and Section SECREF19. With this experiment, we aim to understand whether it is possible to collect reliable judgements for aesthetic perception of poetry from a crowdsourcing platform. A second goal is to see whether we can replicate the expensive expert annotations with less costly crowd annotations.",
"We opted for a maximally simple annotation environment, where we asked participants to annotate English 4-line stanzas with self-perceived reader emotions. We choose English due to the higher availability of English language annotators on crowdsourcing platforms. Each annotator rates each stanza independently of surrounding context."
],
[
"For consistency and to simplify the task for the annotators, we opt for a trade-off between completeness and granularity of the annotation. Specifically, we subselect stanzas composed of four verses from the corpus of 64 hand selected English poems. The resulting selection of 59 stanzas is uploaded to Figure Eight for annotation.",
"The annotators are asked to answer the following questions for each instance.",
"Question 1 (single-choice): Read the following stanza and decide for yourself which emotions it evokes.",
"Question 2 (multiple-choice): Which additional emotions does the stanza evoke?",
"The answers to both questions correspond to the emotion labels we defined to use in our annotation, as described in Section SECREF19. We add an additional answer choice “None” to Question 2 to allow annotators to say that a stanza does not evoke any additional emotions.",
"Each instance is annotated by ten people. We restrict the task geographically to the United Kingdom and Ireland and set the internal parameters on Figure Eight to only include the highest quality annotators to join the task. We pay 0.09 per instance. The final cost of the crowdsourcing experiment is 74."
],
[
"In the following, we determine the best aggregation strategy regarding the 10 annotators with bootstrap resampling. For instance, one could assign the label of a specific emotion to an instance if just one annotators picks it, or one could assign the label only if all annotators agree on this emotion. To evaluate this, we repeatedly pick two sets of 5 annotators each out of the 10 annotators for each of the 59 stanzas, 1000 times overall (i.e., 1000$\\times $59 times, bootstrap resampling). For each of these repetitions, we compare the agreement of these two groups of 5 annotators. Each group gets assigned with an adjudicated emotion which is accepted if at least one annotator picks it, at least two annotators pick it, etc. up to all five pick it.",
"We show the results in Table TABREF27. The $\\kappa $ scores show the average agreement between the two groups of five annotators, when the adjudicated class is picked based on the particular threshold of annotators with the same label choice. We see that some emotions tend to have higher agreement scores than others, namely Annoyance (.66), Sadness (up to .52), and Awe/Sublime, Beauty/Joy, Humor (all .46). The maximum agreement is reached mostly with a threshold of 2 (4 times) or 3 (3 times).",
"We further show in the same table the average numbers of labels from each strategy. Obviously, a lower threshold leads to higher numbers (corresponding to a disjunction of annotations for each emotion). The drop in label counts is comparably drastic, with on average 18 labels per class. Overall, the best average $\\kappa $ agreement (.32) is less than half of what we saw for the expert annotators (roughly .70). Crowds especially disagree on many more intricate emotion labels (Uneasiness, Vitality, Nostalgia, Suspense).",
"We visualize how often two emotions are used to label an instance in a confusion table in Figure FIGREF18. Sadness is used most often to annotate a stanza, and it is often confused with Suspense, Uneasiness, and Nostalgia. Further, Beauty/Joy partially overlaps with Awe/Sublime, Nostalgia, and Sadness.",
"On average, each crowd annotator uses two emotion labels per stanza (56% of cases); only in 36% of the cases the annotators use one label, and in 6% and 1% of the cases three and four labels, respectively. This contrasts with the expert annotators, who use one label in about 70% of the cases and two labels in 30% of the cases for the same 59 four-liners. Concerning frequency distribution for emotion labels, both experts and crowds name Sadness and Beauty/Joy as the most frequent emotions (for the `best' threshold of 3) and Nostalgia as one of the least frequent emotions. The Spearman rank correlation between experts and crowds is about 0.55 with respect to the label frequency distribution, indicating that crowds could replace experts to a moderate degree when it comes to extracting, e.g., emotion distributions for an author or time period. Now, we further compare crowds and experts in terms of whether crowds could replicate expert annotations also on a finer stanza level (rather than only on a distributional level)."
],
[
"To gauge the quality of the crowd annotations in comparison with our experts, we calculate agreement on the emotions between experts and an increasing group size from the crowd. For each stanza instance $s$, we pick $N$ crowd workers, where $N\\in \\lbrace 4,6,8,10\\rbrace $, then pick their majority emotion for $s$, and additionally pick their second ranked majority emotion if at least $\\frac{N}{2}-1$ workers have chosen it. For the experts, we aggregate their emotion labels on stanza level, then perform the same strategy for selection of emotion labels. Thus, for $s$, both crowds and experts have 1 or 2 emotions. For each emotion, we then compute Cohen's $\\kappa $ as before. Note that, compared to our previous experiments in Section SECREF26 with a threshold, each stanza now receives an emotion annotation (exactly one or two emotion labels), both by the experts and the crowd-workers.",
"In Figure FIGREF30, we plot agreement between experts and crowds on stanza level as we vary the number $N$ of crowd workers involved. On average, there is roughly a steady linear increase in agreement as $N$ grows, which may indicate that $N=20$ or $N=30$ would still lead to better agreement. Concerning individual emotions, Nostalgia is the emotion with the least agreement, as opposed to Sadness (in our sample of 59 four-liners): the agreement for this emotion grows from $.47$ $\\kappa $ with $N=4$ to $.65$ $\\kappa $ with $N=10$. Sadness is also the most frequent emotion, both according to experts and crowds. Other emotions for which a reasonable agreement is achieved are Annoyance, Awe/Sublime, Beauty/Joy, Humor ($\\kappa $ > 0.2). Emotions with little agreement are Vitality, Uneasiness, Suspense, Nostalgia ($\\kappa $ < 0.2).",
"By and large, we note from Figure FIGREF18 that expert annotation is more restrictive, with experts agreeing more often on particular emotion labels (seen in the darker diagonal). The results of the crowdsourcing experiment, on the other hand, are a mixed bag as evidenced by a much sparser distribution of emotion labels. However, we note that these differences can be caused by 1) the disparate training procedure for the experts and crowds, and 2) the lack of opportunities for close supervision and on-going training of the crowds, as opposed to the in-house expert annotators.",
"In general, however, we find that substituting experts with crowds is possible to a certain degree. Even though the crowds' labels look inconsistent at first view, there appears to be a good signal in their aggregated annotations, helping to approximate expert annotations to a certain degree. The average $\\kappa $ agreement (with the experts) we get from $N=10$ crowd workers (0.24) is still considerably below the agreement among the experts (0.70)."
],
[
"To estimate the difficulty of automatic classification of our data set, we perform multi-label document classification (of stanzas) with BERT BIBREF41. For this experiment we aggregate all labels for a stanza and sort them by frequency, both for the gold standard and the raw expert annotations. As can be seen in Figure FIGREF23, a stanza bears a minimum of one and a maximum of four emotions. Unfortunately, the label Nostalgia is only available 16 times in the German data (the gold standard) as a second label (as discussed in Section SECREF19). None of our models was able to learn this label for German. Therefore we omit it, leaving us with eight proper labels.",
"We use the code and the pre-trained BERT models of Farm, provided by deepset.ai. We test the multilingual-uncased model (Multiling), the german-base-cased model (Base), the german-dbmdz-uncased model (Dbmdz), and we tune the Base model on 80k stanzas of the German Poetry Corpus DLK BIBREF30 for 2 epochs, both on token (masked words) and sequence (next line) prediction (Base$_{\\textsc {Tuned}}$).",
"We split the randomized German dataset so that each label is at least 10 times in the validation set (63 instances, 113 labels), and at least 10 times in the test set (56 instances, 108 labels) and leave the rest for training (617 instances, 946 labels). We train BERT for 10 epochs (with a batch size of 8), optimize with entropy loss, and report F1-micro on the test set. See Table TABREF36 for the results.",
"We find that the multilingual model cannot handle infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, increasing the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both validation and test set. See Table TABREF37 for a breakdown of all emotions as predicted by the this model. Precision is mostly higher than recall. The labels Awe/Sublime, Suspense and Humor are harder to predict than the other labels.",
"The BASE and BASE$_{\\textsc {TUNED}}$ models perform slightly worse than DBMDZ. The effect of tuning of the BASE model is questionable, probably because of the restricted vocabulary (30k). We found that tuning on poetry does not show obvious improvements. Lastly, we find that models that were trained on lines (instead of stanzas) do not achieve the same F1 (~.42 for the German models)."
],
[
"In this paper, we presented a dataset of German and English poetry annotated with reader response to reading poetry. We argued that basic emotions as proposed by psychologists (such as Ekman and Plutchik) that are often used in emotion analysis from text are of little use for the annotation of poetry reception. We instead conceptualized aesthetic emotion labels and showed that a closely supervised annotation task results in substantial agreement—in terms of $\\kappa $ score—on the final dataset.",
"The task of collecting reader-perceived emotion response to poetry in a crowdsourcing setting is not straightforward. In contrast to expert annotators, who were closely supervised and reflected upon the task, the annotators on crowdsourcing platforms are difficult to control and may lack necessary background knowledge to perform the task at hand. However, using a larger number of crowd annotators may lead to finding an aggregation strategy with a better trade-off between quality and quantity of adjudicated labels. For future work, we thus propose to repeat the experiment with larger number of crowdworkers, and develop an improved training strategy that would suit the crowdsourcing environment.",
"The dataset presented in this paper can be of use for different application scenarios, including multi-label emotion classification, style-conditioned poetry generation, investigating the influence of rhythm/prosodic features on emotion, or analysis of authors, genres and diachronic variation (e.g., how emotions are represented differently in certain periods).",
"Further, though our modeling experiments are still rudimentary, we propose that this data set can be used to investigate the intra-poem relations either through multi-task learning BIBREF49 and/or with the help of hierarchical sequence classification approaches."
],
[
"A special thanks goes to Gesine Fuhrmann, who created the guidelines and tirelessly documented the annotation progress. Also thanks to Annika Palm and Debby Trzeciak who annotated and gave lively feedback. For help with the conceptualization of labels we thank Ines Schindler. This research has been partially conducted within the CRETA center (http://www.creta.uni-stuttgart.de/) which is funded by the German Ministry for Education and Research (BMBF) and partially funded by the German Research Council (DFG), projects SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1). This work has also been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) at the Technische Universität Darmstadt under grant No. GRK 1994/1."
],
[
"We illustrate two examples of our German gold standard annotation, a poem each by Friedrich Hölderlin and Georg Trakl, and an English poem by Walt Whitman. Hölderlin's text stands out, because the mood changes starkly from the first stanza to the second, from Beauty/Joy to Sadness. Trakl's text is a bit more complex with bits of Nostalgia and, most importantly, a mixture of Uneasiness with Awe/Sublime. Whitman's poem is an example of Vitality and its mixing with Sadness. The English annotation was unified by us for space constraints. For the full annotation please see https://github.com/tnhaider/poetry-emotion/"
],
[
""
],
[
""
],
[
""
]
]
} | {
"question": [
"Does the paper report macro F1?",
"How is the annotation experiment evaluated?",
"What are the aesthetic emotions formalized?"
],
"question_id": [
"3a9d391d25cde8af3334ac62d478b36b30079d74",
"8d8300d88283c73424c8f301ad9fdd733845eb47",
"48b12eb53e2d507343f19b8a667696a39b719807"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"German",
"German",
"German"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We find that the multilingual model cannot handle infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, increasing the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both validation and test set. See Table TABREF37 for a breakdown of all emotions as predicted by the this model. Precision is mostly higher than recall. The labels Awe/Sublime, Suspense and Humor are harder to predict than the other labels.",
"FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels."
],
"highlighted_evidence": [
"See Table TABREF37 for a breakdown of all emotions as predicted by the this model.",
"FLOAT SELECTED: Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels."
]
}
],
"annotation_id": [
"0220672a84e5d828ec90f8ee65ab39414cd170f7",
"bac3f916c426a5809d910072100fdf12ad3fc30d"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"confusion matrices of labels between annotators"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We find that Cohen $\\kappa $ agreement ranges from .84 for Uneasiness in the English data, .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61) and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, where the ranking is almost identical, especially for German. However, for English, Annotator 2 annotates more Vitality than Uneasiness. Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps. Notably, Beauty/Joy and Sadness are confused across annotators more often than other labels. This is topical for poetry, and therefore not surprising: One might argue that the beauty of beings and situations is only beautiful because it is not enduring and therefore not to divorce from the sadness of the vanishing of beauty BIBREF48. We also find considerable confusion of Sadness with Awe/Sublime and Vitality, while the latter is also regularly confused with Beauty/Joy."
],
"highlighted_evidence": [
"Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps."
]
}
],
"annotation_id": [
"218914b3ebf4fe7a1026f109cf02b0c3e37905b6"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking)",
"Emotions that exhibit this dual capacity have been defined as “aesthetic emotions”"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1, BIBREF2, BIBREF3. Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2. Contrary to the negativity bias of classical emotion catalogues, emotion terms used for aesthetic evaluation purposes include far more positive than negative emotions. At the same time, many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2, e.g., feelings of suspense include both hopeful and fearful anticipations."
],
"highlighted_evidence": [
"Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2."
]
}
],
"annotation_id": [
"1d2fb096ab206ab6e9b50087134e1ef663a855d1"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Temporal distribution of poetry corpora (Kernel Density Plots with bandwidth = 0.2).",
"Table 1: Statistics on our poetry corpora PO-EMO.",
"Table 2: Aesthetic Emotion Factors (Schindler et al., 2017).",
"Table 3: Cohen’s kappa agreement levels and normalized line-level emotion frequencies for expert annotators (Nostalgia is not available in the German data).",
"Table 4: Top: averaged kappa scores and micro-F1 agreement scores, taking one annotator as gold. Bottom: Baselines.",
"Figure 2: Emotion cooccurrence matrices for the German and English expert annotation experiments and the English crowdsourcing experiment.",
"Figure 3: Distribution of number of distinct emotion labels per logical document level in the expert-based annotation. No whole poem has more than 6 emotions. No stanza has more than 4 emotions.",
"Table 5: Results obtained via boostrapping for annotation aggregation. The row Threshold shows how many people within a group of five annotators should agree on a particular emotion. The column labeled Counts shows the average number of times certain emotion was assigned to a stanza given the threshold. Cells with ‘–’ mean that neither of two groups satisfied the threshold.",
"Figure 4: Agreement between experts and crowds as a function of the number N of crowd workers.",
"Table 6: BERT-based multi-label classification on stanzalevel.",
"Table 7: Recall and precision scores of the best model (dbmdz) for each emotion on the test set. ‘Support’ signifies the number of labels."
],
"file": [
"3-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"7-Table5-1.png",
"7-Figure4-1.png",
"8-Table6-1.png",
"8-Table7-1.png"
]
} |
1705.09665 | Community Identity and User Engagement in a Multi-Community Landscape | A community's identity defines and shapes its internal dynamics. Our current understanding of this interplay is mostly limited to glimpses gathered from isolated studies of individual communities. In this work we provide a systematic exploration of the nature of this relation across a wide variety of online communities. To this end we introduce a quantitative, language-based typology reflecting two key aspects of a community's identity: how distinctive, and how temporally dynamic it is. By mapping almost 300 Reddit communities into the landscape induced by this typology, we reveal regularities in how patterns of user engagement vary with the characteristics of a community. Our results suggest that the way new and existing users engage with a community depends strongly and systematically on the nature of the collective identity it fosters, in ways that are highly consequential to community maintainers. For example, communities with distinctive and highly dynamic identities are more likely to retain their users. However, such niche communities also exhibit much larger acculturation gaps between existing users and newcomers, which potentially hinder the integration of the latter. More generally, our methodology reveals differences in how various social phenomena manifest across communities, and shows that structuring the multi-community landscape can lead to a better understanding of the systematic nature of this diversity. | {
"section_name": [
"Introduction",
"A typology of community identity",
"Overview and intuition",
"Language-based formalization",
"Community-level measures",
"Applying the typology to Reddit",
"Community identity and user retention",
"Community-type and monthly retention",
"Community-type and user tenure",
"Community identity and acculturation",
"Community identity and content affinity",
"Further related work",
"Conclusion and future work",
"Acknowledgements"
],
"paragraphs": [
[
"“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”",
"",
"— Italo Calvino, Invisible Cities",
"A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.",
"One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?",
"To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.",
"Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identityand user engagement we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and dynamics of its temporal evolution.",
"Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.",
"Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.",
"Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.",
"More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.",
"More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity."
],
[
"A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.",
"We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity."
],
[
"In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.",
"We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.",
"Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).",
"These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B)."
],
[
"Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.",
"Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).",
"In the following, we use INLINEFORM0 to denote one community within a set INLINEFORM1 of communities, and INLINEFORM2 to denote one time period within the entire history INLINEFORM3 of INLINEFORM4 . We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, INLINEFORM5 . Given a word INLINEFORM6 used within a particular community INLINEFORM7 at time INLINEFORM8 , we define two word-level measures:",
"Specificity. We quantify the specificity INLINEFORM0 of INLINEFORM1 to INLINEFORM2 by calculating the PMI of INLINEFORM3 and INLINEFORM4 , relative to INLINEFORM5 , INLINEFORM6 ",
"where INLINEFORM0 is INLINEFORM1 's frequency in INLINEFORM2 . INLINEFORM3 is specific to INLINEFORM4 if it occurs more frequently in INLINEFORM5 than in the entire set INLINEFORM6 , hence distinguishing this community from the rest. A word INLINEFORM7 whose occurrence is decoupled from INLINEFORM8 , and thus has INLINEFORM9 close to 0, is said to be generic.",
"We compute values of INLINEFORM0 for each time period INLINEFORM1 in INLINEFORM2 ; in the above description we drop the time-based subscripts for clarity.",
"Volatility. We quantify the volatility INLINEFORM0 of INLINEFORM1 to INLINEFORM2 as the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 , the entire history of INLINEFORM6 : INLINEFORM7 ",
"A word INLINEFORM0 is volatile at time INLINEFORM1 in INLINEFORM2 if it occurs more frequently at INLINEFORM3 than in the entire history INLINEFORM4 , behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has INLINEFORM5 close to 0, is said to be stable.",
"Extending to utterances. Using our word-level primitives, we define the specificity of an utterance INLINEFORM0 in INLINEFORM1 , INLINEFORM2 as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.",
""
],
[
"Having described these word-level measures, we now proceed to establish the primary axes of our typology:",
"Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less distinctive identity as being generic.",
"Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer to a community whose language is relatively consistent throughout time as being stable.",
"In our subsequent analyses, we focus mostly on examing the average distinctiveness and dynamicity of a community over time, denoted INLINEFORM0 and INLINEFORM1 ."
],
[
"We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.",
"Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.",
"The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.",
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 ).",
"Estimating linguistic measures. We estimate word frequencies INLINEFORM0 , and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.",
"In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.",
"Accounting for rare words. One complication when using measures such as PMI, which are based off of ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each INLINEFORM0 , to score comments and communities.",
"Typology output on Reddit. The distribution of INLINEFORM0 and INLINEFORM1 across Reddit communities is shown in Figure FIGREF3 .B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9 .",
"We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered."
],
[
"We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.",
"In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).",
"We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention."
],
[
"We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's INLINEFORM2 = 0.33, INLINEFORM3 0.001; Figure FIGREF11 .A, right).",
"Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 . Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.",
"Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features."
],
[
"As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's INLINEFORM0 = 0.41, INLINEFORM1 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's INLINEFORM2 = 0.03, INLINEFORM3 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.",
"To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community)."
],
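A minimal sketch (not from the paper) of how the monthly retention measure defined above can be computed and correlated with a community-level measure such as dynamicity. Column names (`user`, `community`, `month` as an ordinal integer) and the use of pandas/scipy are assumptions for illustration; the last observed month has no follow-up data and would be dropped in practice.

```python
import pandas as pd
from scipy.stats import spearmanr

def monthly_retention(comments: pd.DataFrame) -> pd.DataFrame:
    """comments has columns: user, community, month (ordinal int)."""
    active = comments[["user", "community", "month"]].drop_duplicates()
    # A row (user, community, t+1), shifted back by one month, marks a return at t.
    nxt = active.assign(month=active["month"] - 1)
    merged = active.merge(nxt, on=["user", "community", "month"],
                          how="left", indicator=True)
    merged["returned"] = merged["_merge"] == "both"
    return (merged.groupby(["community", "month"])["returned"]
                  .mean().rename("retention").reset_index())

def correlate_with_dynamicity(retention: pd.DataFrame, dynamicity: pd.DataFrame):
    """dynamicity has columns: community, month, dynamicity."""
    df = retention.merge(dynamicity, on=["community", "month"])
    # Average community points over months before correlating, as described in the text.
    by_comm = df.groupby("community")[["retention", "dynamicity"]].mean()
    return spearmanr(by_comm["dynamicity"], by_comm["retention"])
```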
[
"The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.",
"We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).",
"This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive, linguistic identity for users to adopt.",
"To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al danescu-niculescu-mizilno2013 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point of time. Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the cross-entropy of this utterance relative to the SLM: DISPLAYFORM0 ",
"where INLINEFORM0 is the probability assigned to bigram INLINEFORM1 from comment INLINEFORM2 in community-month INLINEFORM3 . We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.",
"We compute a basic measure of the acculturation gap for a community-month INLINEFORM0 as the relative difference of the cross-entropy of comments by users active in INLINEFORM1 with that of singleton comments by outsiders—i.e., users who only ever commented once in INLINEFORM2 , but who are still active in Reddit in general: DISPLAYFORM0 ",
" INLINEFORM0 denotes the distribution over singleton comments, INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 , and INLINEFORM3 the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.",
"Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.",
"These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary."
],
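A simplified sketch of the snapshot language model and acculturation-gap computation described above. The add-one-smoothed bigram model and the particular relative-difference form are assumptions for illustration; the paper's exact smoothing, sampling, and gap formula (elided here as DISPLAYFORM placeholders) may differ.

```python
import math
from collections import Counter

class SnapshotLM:
    """A bigram 'snapshot language model' built from sampled word spans."""
    def __init__(self, spans):                      # spans: list of token lists
        self.bi, self.uni = Counter(), Counter()
        for toks in spans:
            self.uni.update(toks)
            self.bi.update(zip(toks, toks[1:]))
        self.vocab = len(self.uni) + 1

    def cross_entropy(self, toks):
        """Average negative log2 probability of the comment's bigrams."""
        lps = [math.log2((self.bi[(a, b)] + 1) / (self.uni[a] + self.vocab))
               for a, b in zip(toks, toks[1:])]
        return -sum(lps) / max(len(lps), 1)

def acculturation_gap(slm, active_comments, outsider_comments):
    """One relative-difference form: outsiders' vs. active users' cross-entropy."""
    h_out = sum(slm.cross_entropy(c) for c in outsider_comments) / len(outsider_comments)
    h_act = sum(slm.cross_entropy(c) for c in active_comments) / len(active_comments)
    return (h_out - h_act) / (h_out + h_act)
```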
[
"Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.",
"Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.",
"We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community INLINEFORM0 , we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members, and by outsiders, where these measures are macroaveraged over users. Large, positive INLINEFORM2 then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.",
"We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments. Large, positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.",
"We find that in 94% of communities, INLINEFORM0 , indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced ( INLINEFORM1 0.33) compared to funny, a large hub where users share humorous content ( INLINEFORM2 0.011).",
"The nature of the volatility gap is comparatively more varied. In Homebrewing ( INLINEFORM0 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders ( INLINEFORM1 0). However, communities like funny ( INLINEFORM2 -0.16), where active users contribute relatively stable comments compared to outsiders ( INLINEFORM3 0), are also well-represented on Reddit.",
"To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's INLINEFORM0 0.34, INLINEFORM1 0.001).",
"We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's INLINEFORM0 0.53, INLINEFORM1 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean INLINEFORM2 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean INLINEFORM3 -0.047, Mann-Whitney U test INLINEFORM4 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term."
],
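An illustrative sketch of the specificity/volatility gap computation: a comment's score is taken as the mean word-level score, scores are macro-averaged over users, and the gap is a relative difference between active members and outsiders. The word-score dictionary, the nan-handling, and the exact relative-difference form are assumptions, not the paper's implementation.

```python
import numpy as np

def comment_score(tokens, score):                     # score: dict word -> float
    vals = [score[w] for w in tokens if w in score]
    return np.mean(vals) if vals else np.nan

def affinity_gap(active_user_comments, outsider_comments, score):
    """Both arguments: dict user -> list of token lists (their comments)."""
    def macro_avg(user_comments):
        per_user = [np.nanmean([comment_score(c, score) for c in cs])
                    for cs in user_comments.values()]
        return np.nanmean(per_user)
    a, o = macro_avg(active_user_comments), macro_avg(outsider_comments)
    # Positive values: active members engage with more community-specific
    # (or more volatile) content than outsiders.
    return (a - o) / (a + o)
```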
[
"Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.",
"Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.",
"Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.",
"Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .",
"Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.",
"Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .",
"In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities."
],
[
"Our current understanding of engagement patterns in online communities is patched up from glimpses offered by several disparate studies focusing on a few individual communities. This work calls into attention the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.",
"Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.",
"One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?",
"Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes."
],
[
"The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. "
]
]
} | {
"question": [
"Do they report results only on English data?",
"How do the various social phenomena examined manifest in different types of communities?",
"What patterns do they observe about how user engagement varies with the characteristics of a community?",
"How did the select the 300 Reddit communities for comparison?",
"How do the authors measure how temporally dynamic a community is?",
"How do the authors measure how distinctive a community is?"
],
"question_id": [
"003f884d3893532f8c302431c9f70be6f64d9be8",
"bb97537a0a7c8f12a3f65eba73cefa6abcd2f2b2",
"eea089baedc0ce80731c8fdcb064b82f584f483a",
"edb2d24d6d10af13931b3a47a6543bd469752f0c",
"938cf30c4f1d14fa182e82919e16072fdbcf2a82",
"93f4ad6568207c9bd10d712a52f8de25b3ebadd4"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 )."
],
"highlighted_evidence": [
"We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. "
]
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"04ae0cc420f69540ca11707ab8ecc07a89f803f7",
"31d8f8ed7ba40b27c480f7caf7cfb48fba47bb07"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Dynamic communities have substantially higher rates of monthly user retention than more stable communities. More distinctive communities exhibit moderately higher monthly retention rates than more generic communities. There is also a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community - a short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content.\n",
"evidence": [
"We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's INLINEFORM2 = 0.33, INLINEFORM3 0.001; Figure FIGREF11 .A, right).",
"As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's INLINEFORM0 = 0.41, INLINEFORM1 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's INLINEFORM2 = 0.03, INLINEFORM3 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content."
],
"highlighted_evidence": [
"We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's INLINEFORM2 = 0.33, INLINEFORM3 0.001; Figure FIGREF11 .A, right).",
"As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's INLINEFORM0 = 0.41, INLINEFORM1 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's INLINEFORM2 = 0.03, INLINEFORM3 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content."
]
}
],
"annotation_id": [
"8a080f37fbbb5c6700422a346b944ef535fa725b"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members",
"within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.",
"More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities."
],
"highlighted_evidence": [
"We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.",
"More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). "
]
}
],
"annotation_id": [
"f64ff06cfd16f9bd339512a6e85f0a7bc8b670f4"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "They selected all the subreddits from January 2013 to December 2014 with at least 500 words in the vocabulary and at least 4 months of the subreddit's history. They also removed communities with the bulk of the contributions are in foreign language.",
"evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 )."
],
"highlighted_evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. "
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "They collect subreddits from January 2013 to December 2014,2 for which there are at\nleast 500 words in the vocabulary used to estimate the measures,\nin at least 4 months of the subreddit’s history. They compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language.",
"evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 )."
],
"highlighted_evidence": [
"Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 )."
]
}
],
"annotation_id": [
"2c804f9b9543e3b085fbd1fff87f0fde688f1484",
"78de92427e9e37b0dfdc19f57b735e65cec40e0a"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"the average volatility of all utterances"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer to a community whose language is relatively consistent throughout time as being stable."
],
"highlighted_evidence": [
". A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . "
]
}
],
"annotation_id": [
"62d30e963bf86e9b2d454adbd4b2c4dc3107cd11"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" the average specificity of all utterances"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less distinctive identity as being generic."
],
"highlighted_evidence": [
"A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 "
]
}
],
"annotation_id": [
"21484dfac315192bb69aee597ebf5d100ff5925b"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Figure 1: A: Within a community certain words are more community-specific and temporally volatile than others. For instance, words like onesies are highly specific to the BabyBumps community (top left corner), while words like easter are temporally ephemeral. B: Extending these word-level measures to communities, we can measure the overall distinctiveness and dynamicity of a community, which are highly associated with user retention rates (colored heatmap; see Section 3). Communities like Seahawks (a football team) and Cooking use highly distinctive language. Moreover, Seahawks uses very dynamic language, as the discussion continually shifts throughout the football season. In contrast, the content of Cooking remains stable over time, as does the content of pics; though these communities do have ephemeral fads, the overall themes discussed generally remain stable.",
"Table 1: Examples of communities on Reddit which occur at the extremes (top and bottom quartiles) of our typology.",
"Figure 2: A: The monthly retention rate for communities differs drastically according to their position in our identity-based typology, with dynamicity being the strongest signal of higher user retention (x-axes bin community-months by percentiles; in all subsequent plots, error bars indicate 95% bootstrapped confidence intervals). B: Dynamicity also correlates with long-term user retention, measured as the number of months the average user spends in the community; however, distinctiveness does not correlate with this longer-term variant of user retention.",
"Figure 3: A: There is substantial variation in the direction and magnitude of the acculturation gap, which quantifies the extent to which established members of a community are linguistically differentiated from outsiders. Among 60% of communities this gap is positive, indicating that established users match the community’s language more than outsiders. B: The size of the acculturation gap varies systematically according to how dynamic and distinctive a community is. Distinctive communities exhibit larger gaps; as do relatively stable, and very dynamic communities."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png"
]
} |
1908.06606 | Question Answering based Clinical Text Structuring Using Pre-trained Language Model | Clinical text structuring is a critical and fundamental task for clinical research. Traditional methods such as taskspecific end-to-end models and pipeline models usually suffer from the lack of dataset and error propagation. In this paper, we present a question answering based clinical text structuring (QA-CTS) task to unify different specific tasks and make dataset shareable. A novel model that aims to introduce domain-specific features (e.g., clinical named entity information) into pre-trained language model is also proposed for QA-CTS task. Experimental results on Chinese pathology reports collected from Ruijing Hospital demonstrate our presented QA-CTS task is very effective to improve the performance on specific tasks. Our proposed model also competes favorably with strong baseline models in specific tasks. | {
"section_name": [
"Introduction",
"Related Work ::: Clinical Text Structuring",
"Related Work ::: Pre-trained Language Model",
"Question Answering based Clinical Text Structuring",
"The Proposed Model for QA-CTS Task",
"The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text",
"The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information",
"The Proposed Model for QA-CTS Task ::: Integration Method",
"The Proposed Model for QA-CTS Task ::: Final Prediction",
"The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism",
"Experimental Studies",
"Experimental Studies ::: Dataset and Evaluation Metrics",
"Experimental Studies ::: Experimental Settings",
"Experimental Studies ::: Comparison with State-of-the-art Methods",
"Experimental Studies ::: Ablation Analysis",
"Experimental Studies ::: Comparisons Between Two Integration Methods",
"Experimental Studies ::: Data Integration Analysis",
"Conclusion",
"Acknowledgment"
],
"paragraphs": [
[
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.",
"However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.",
"Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.",
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.",
"We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model.",
"Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task.",
"The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6."
],
[
"Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.",
"Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names. Wang et al. BIBREF1 developed some linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique and expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. This kind of approach features its interpretability and easy modifiability. However, with the increase of the rule amount, supplementing new rules to existing system will turn to be a rule disaster.",
"Task-specific end-to-end methods BIBREF3, BIBREF4 use large amount of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five output. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be used to another task due to output format difference. This makes building a new model for a new task a costly job.",
"Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attributes extraction which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components like noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitter, named entity linking BIBREF19, BIBREF20, BIBREF21, relation extraction BIBREF22, BIBREF23. This kind of method focus on language itself, so it can handle tasks more general. However, as the depth of pipeline grows, it is obvious that error propagation will be more and more serious. In contrary, using less components to decrease the pipeline depth will lead to a poor performance. So the upper limit of this method depends mainly on the worst component."
],
[
"Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to.",
"The main motivation of introducing pre-trained language model is to solve the shortage of labeled data and polysemy problem. Although polysemy problem is not a common phenomenon in biomedical domain, shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT on large-scale biomedical unannotated data and achieved improvement on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT into multi-type named entity recognition and discovered new entities. Both of them demonstrates the usefulness of introducing pre-trained language model into biomedical domain."
],
[
"Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded to extract or generate a key-value pair where key $Q$ is typically a query term such as proximal resection margin and value $V$ is a result of query term $Q$ according to the paragraph text $X$.",
"Generally, researchers solve CTS problem in two steps. Firstly, the answer-related text is pick out. And then several steps such as entity names conversion and negative words recognition are deployed to generate the final answer. While final answer varies from task to task, which truly causes non-uniform output formats, finding the answer-related text is a common action among all tasks. Traditional methods regard both the steps as a whole. In this paper, we focus on finding the answer-related substring $Xs = <X_i, X_i+1, X_i+2, ... X_j> (1 <= i < j <= n)$ from paragraph text $X$. For example, given sentence UTF8gkai“远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm\" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and query UTF8gkai“上切缘距离\"(proximal resection margin), the answer should be 6.0cm which is located in original text from index 32 to 37. With such definition, it unifies the output format of CTS tasks and therefore make the training data shareable, in order to reduce the training data quantity requirement.",
"Since BERT BIBREF26 has already demonstrated the usefulness of shared model, we suppose extracting commonality of this problem and unifying the output format will make the model more powerful than dedicated model and meanwhile, for a specific clinical task, use the data for other tasks to supplement the training data."
],
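Purely for illustration, the unified QA-CTS output format described above can be represented as a (paragraph, query, character span) triple; the class and field names below are hypothetical, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class QACTSInstance:
    paragraph: str      # original clinical sentence X
    query: str          # query term Q, e.g. "proximal resection margin"
    start: int          # start index of the answer-related substring in X
    end: int            # end index (exclusive)

    @property
    def answer(self) -> str:
        # The model predicts (start, end); the final value (e.g. "6.0cm") is this
        # substring, possibly followed by task-specific post-processing such as
        # entity name conversion or negative word recognition.
        return self.paragraph[self.start:self.end]
```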
[
"In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word."
],
[
"For any clinical free-text paragraph $X$ and query $Q$, contextualized representation is to generate the encoded vector of both of them. Here we use pre-trained language model BERT-base BIBREF26 model to capture contextual information.",
"The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For Chinese sentence, each word in this input will be mapped to a pre-trained embedding $e_i$. To tell the model $Q$ and $X$ is two different sentence, a sentence type input is generated which is a binary label sequence to denote what sentence each character in the input belongs to. Positional encoding and mask matrix is also constructed automatically to bring in absolute position information and eliminate the impact of zero padding respectively. Then a hidden vector $V_s$ which contains both query and text information is generated through BERT-base model."
],
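A minimal sketch of the input construction just described, using the Hugging Face `transformers` API for illustration (the paper's implementation uses Keras/TensorFlow, and the choice of the `bert-base-chinese` checkpoint is an assumption based on the stated use of Google's Chinese general-corpus parameters).

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

def contextualize(query: str, paragraph: str) -> torch.Tensor:
    # Builds "[CLS] Q [SEP] X [SEP]" with token-type ids (sentence types),
    # positions, and an attention mask handled by the tokenizer and model.
    enc = tokenizer(query, paragraph, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = bert(**enc)
    return out.last_hidden_state        # V_s: one contextual vector per token
```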
[
"Since BERT is trained on general corpus, its performance on biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.",
"The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags. Each character of the original sentence will be tagged a label following a tag scheme. In this paper we recognize the entities by the model of our previous work BIBREF12 but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of named entity information sequence is demonstrated in Table TABREF2. In Table TABREF2, UTF8gkai“远端胃切除\" is tagged as an operation, `11.5' is a number word and `cm' is an unit word. The named entity tag sequence is organized in one-hot type. We denote the sequence for clinical sentence and query term as $I_{nt}$ and $I_{nq}$, respectively."
],
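A small sketch of how a BIEOS tag sequence (produced by an external CNER model, assumed here) can be turned into the one-hot matrices $I_{nt}$ / $I_{nq}$ used by the model; the tag vocabulary is an assumption for illustration.

```python
import numpy as np

def one_hot_tags(tags, tag_vocab):
    """tags: list of BIEOS tags, e.g. ['B-operation', 'I-operation', ..., 'O'].
    tag_vocab: dict mapping each tag string to an integer id."""
    mat = np.zeros((len(tags), len(tag_vocab)), dtype=np.float32)
    for i, t in enumerate(tags):
        mat[i, tag_vocab[t]] = 1.0
    return mat        # shape: (sequence length, number of tag types)
```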
[
"There are two ways to integrate two named entity information vectors $I_{nt}$ and $I_{nq}$ or hidden contextualized representation $V_s$ and named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first one is to concatenate them together because they have sequence output with a common dimension. The second one is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.",
"While for the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows where $h$ is the number of heads and $W_o$ is used to projects back the dimension of concatenated matrix.",
"$Attention$ denotes the traditional attention and it can be defined as follows.",
"where $d_k$ is the length of hidden vector."
],
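A sketch of the two integration options in PyTorch, shown for clarity rather than as the paper's implementation. The projection sizes and the named-entity feature dimension are assumptions; the head count and per-head size (16 heads, 256 dimensions each) follow the experimental settings reported later in the paper.

```python
import torch
import torch.nn as nn

class ConcatIntegrate(nn.Module):
    def forward(self, v_s, i_n):                 # (B, L, d_bert), (B, L, d_ner)
        return torch.cat([v_s, i_n], dim=-1)     # (B, L, d_bert + d_ner)

class AttnIntegrate(nn.Module):
    def __init__(self, d_ner, d_bert=768, heads=16, d_head=256):
        super().__init__()
        d_model = heads * d_head                 # 16 heads x 256 dims per head
        self.proj_v = nn.Linear(d_bert, d_model)
        self.proj_i = nn.Linear(d_ner, d_model)
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.w_o = nn.Linear(d_model, d_bert)    # W_o: project back

    def forward(self, v_s, i_n):
        q, kv = self.proj_v(v_s), self.proj_i(i_n)
        h, _ = self.attn(q, kv, kv)              # attend over the NER features
        return self.w_o(h)
```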
[
"The final step is to use integrated representation $H_i$ to predict the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word. We use a feed forward network (FFN) to compress and calculate the score of each word $H_f$ which makes the dimension to $\\left\\langle l_s, 2\\right\\rangle $ where $l_s$ denotes the length of sequence.",
"Then we permute the two dimensions for softmax calculation. The calculation process of loss function can be defined as followed.",
"where $O_s = softmax(permute(H_f)_0)$ denotes the probability score of each word to be the start word and similarly $O_e = softmax(permute(H_f)_1)$ denotes the end. $y_s$ and $y_e$ denotes the true answer of the output for start word and end word respectively."
],
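A minimal sketch of the prediction head and loss just described: each token receives a start score and an end score, and two softmax/cross-entropy terms are computed over the sequence dimension. Module and variable names are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class SpanHead(nn.Module):
    def __init__(self, d_in):
        super().__init__()
        self.ffn = nn.Linear(d_in, 2)                     # H_f: (B, L, 2)

    def forward(self, h_i, y_start, y_end):
        logits = self.ffn(h_i)                            # (B, L, 2)
        start_logits, end_logits = logits.unbind(dim=-1)  # each (B, L)
        # Each token position acts as a class; y_start / y_end are position indices.
        loss = (nn.functional.cross_entropy(start_logits, y_start) +
                nn.functional.cross_entropy(end_logits, y_end))
        return loss, start_logits.argmax(-1), end_logits.argmax(-1)
```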
[
"Two-stage training mechanism is previously applied on bilinear model in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained at first for coarse-graind features while freezing the parameter of the other. Then unfreeze the other one and train the entire model in a low learning rate for fetching fine-grained features.",
"Inspired by this and due to the large amount of parameters in BERT model, to speed up the training process, we fine tune the BERT model with new prediction layer first to achieve a better contextualized representation performance. Then we deploy the proposed model and load the fine tuned BERT weights, attach named entity information layers and retrain the model."
],
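A schematic sketch of the two-stage procedure. The training loop, the `.bert` attribute, and the assumption that the model returns its own loss are all hypothetical; only the Adam optimizer at a 5e-5 learning rate comes from the experimental settings reported later.

```python
import torch

def train_one_epoch(model, loader, opt):
    for batch in loader:
        loss = model(**batch)          # assumes the model returns its loss
        opt.zero_grad(); loss.backward(); opt.step()

def two_stage(bert_qa, full_model, loader, epochs=1):
    # Stage 1: fine-tune BERT with only the new prediction layer attached.
    opt1 = torch.optim.Adam(bert_qa.parameters(), lr=5e-5)
    for _ in range(epochs):
        train_one_epoch(bert_qa, loader, opt1)
    # Stage 2: load the fine-tuned BERT weights into the full model (with the
    # named entity information layers attached) and retrain end to end.
    full_model.bert.load_state_dict(bert_qa.bert.state_dict())
    opt2 = torch.optim.Adam(full_model.parameters(), lr=5e-5)
    for _ in range(epochs):
        train_one_epoch(full_model, loader, opt2)
```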
[
"In this section, we devote to experimentally evaluating our proposed task and approach. The best results in tables are in bold."
],
[
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.",
"In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score metric is a looser metric measures the average overlap between the prediction and ground truth answer."
],
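Common definitions of the two metrics, for reference. The overlap-based F$_1$ below uses character-level matching, which is an assumption (reasonable for Chinese text but not stated explicitly in the paper).

```python
from collections import Counter

def em_score(pred: str, golds: list) -> float:
    """1.0 if the prediction exactly matches any ground truth answer, else 0.0."""
    return float(any(pred == g for g in golds))

def f1_score(pred: str, gold: str) -> float:
    """Harmonic mean of character-overlap precision and recall."""
    common = Counter(pred) & Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```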
[
"To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts."
],
[
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23.",
"Table TABREF23 indicates that our proposed model achieved the best performance both in EM-score and F$_1$-score with EM-score of 91.84% and F$_1$-score of 93.75%. QANet outperformed BERT-Base with 3.56% score in F$_1$-score but underperformed it with 0.75% score in EM-score. Compared with BERT-Base, our model led to a 5.64% performance improvement in EM-score and 3.69% in F$_1$-score. Although our model didn't outperform much with QANet in F$_1$-score (only 0.13%), our model significantly outperformed it with 6.39% score in EM-score."
],
[
"To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\\times $ refers to removing that part from our model.",
"As demonstrated in Table TABREF25, with named entity information enabled, two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without two-stage training mechanism, named entity information led to an improvement by 1.28% in EM-score but it also led to a weak deterioration by 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% score improvement in EM-score and a 3.69% score improvement in F$_1$-score. The experimental results show that both named entity information and two-stage training mechanism are helpful to our model."
],
[
"There are two methods to integrate named entity information into existing model, we experimentally compare these two integration methods. As named entity recognition has been applied on both pathology report text and query text, there will be two integration here. One is for two named entity information and the other is for contextualized representation and integrated named entity information. For multi-head attention BIBREF30, we set heads number $h = 16$ with 256-dimension hidden vector size for each head.",
"From Table TABREF27, we can observe that applying concatenation on both periods achieved the best performance on both EM-score and F$_1$-score. Unfortunately, applying multi-head attention on both period one and period two can not reach convergence in our experiments. This probably because it makes the model too complex to train. The difference on other two methods are the order of concatenation and multi-head attention. Applying multi-head attention on two named entity information $I_{nt}$ and $I_{nq}$ first achieved a better performance with 89.87% in EM-score and 92.88% in F$_1$-score. Applying Concatenation first can only achieve 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of hidden vectors and dataset size. BERT's output has been modified after many layers but named entity information representation is very close to input. With big amount of parameters in multi-head attention, it requires massive training to find out the optimal parameters. However, our dataset is significantly smaller than what pre-trained BERT uses. This probably can also explain why applying multi-head attention method on both periods can not converge.",
"Although Table TABREF27 shows the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiment fixed the head number and hidden vector size. However, tuning these hyper parameters may have impact on the result. Tuning integration method and try to utilize larger datasets may give help to improving the performance."
],
[
"To investigate how shared task and shared model can benefit, we split our dataset by query types, train our proposed model with different datasets and demonstrate their performance on different datasets. Firstly, we investigate the performance on model without two-stage training and named entity information.",
"As indicated in Table TABREF30, The model trained by mixed data outperforms 2 of the 3 original tasks in EM-score with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% score in EM-score and 3.14% score in F$_1$-score but they were still above 90%. 0.69% and 0.37% score improvement in EM-score was brought by shared model for proximal and distal resection margin prediction. Meanwhile F$_1$-score for those two tasks declined 3.11% and 0.77% score.",
"Then we investigate the performance on model with two-stage training and named entity information. In this experiment, pre-training process only use the specific dataset not the mixed data. From Table TABREF31 we can observe that the performance on proximal and distal resection margin achieved the best performance on both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score. Meanwhile, the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Other performances also usually improved a lot. This proves the usefulness of two-stage training and named entity information as well.",
"Lastly, we fine tune the model for each task with a pre-trained parameter. Table TABREF32 summarizes the result. (Add some explanations for the Table TABREF32). Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters can significantly improve the model performance than task-specific data trained model. Except tumor size, the result was improved by 0.52% score in EM-score, 1.39% score in F$_1$-score for proximal resection margin and 2.6% score in EM-score, 2.96% score in F$_1$-score for distal resection margin. This proves mixed-data pre-trained parameters can lead to a great benefit for specific task. Meanwhile, the model performance on other tasks which are not trained in the final stage was also improved from around 0 to 60 or 70 percent. This proves that there is commonality between different tasks and our proposed QA-CTS task make this learnable. In conclusion, to achieve the best performance for a specific dataset, pre-training the model in multiple datasets and then fine tuning the model on the specific dataset is the best way."
],
[
"In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilize different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to QA-CTS task. Initially, sequential results of named entity recognition on both paragraph and query texts are integrated together. Contextualized representation on both paragraph and query texts are transformed by a pre-trained language model. Then, the integrated named entity information and contextualized representation are integrated together and fed into a feed forward network for final prediction. Experimental results on real-world dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks. The shared task and shared model introduced by QA-CTS task has also been proved to be useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance for a specific dataset is to pre-train the model in multiple datasets and then fine tune it on the specific dataset."
],
[
"We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research\" (No. 2018YFC0910500)."
]
]
} | {
"question": [
"What data is the language model pretrained on?",
"What baselines is the proposed model compared against?",
"How is the clinical text structuring task defined?",
"What are the specific tasks being unified?",
"Is all text in this dataset a question, or are there unrelated sentences in between questions?",
"How many questions are in the dataset?",
"What is the perWhat are the tasks evaluated?",
"Are there privacy concerns with clinical data?",
"How they introduce domain-specific features into pre-trained language model?",
"How big is QA-CTS task dataset?",
"How big is dataset of pathology reports collected from Ruijing Hospital?",
"What are strong baseline models in specific tasks?"
],
"question_id": [
"71a7153e12879defa186bfb6dbafe79c74265e10",
"85d1831c28d3c19c84472589a252e28e9884500f",
"1959e0ebc21fafdf1dd20c6ea054161ba7446f61",
"77cf4379106463b6ebcb5eb8fa5bb25450fa5fb8",
"06095a4dee77e9a570837b35fc38e77228664f91",
"19c9cfbc4f29104200393e848b7b9be41913a7ac",
"6743c1dd7764fc652cfe2ea29097ea09b5544bc3",
"14323046220b2aea8f15fba86819cbccc389ed8b",
"08a5f8d36298b57f6a4fcb4b6ae5796dc5d944a4",
"975a4ac9773a4af551142c324b64a0858670d06e",
"326e08a0f5753b90622902bd4a9c94849a24b773",
"bd78483a746fda4805a7678286f82d9621bc45cf"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"five",
"five",
"five",
"five",
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"research",
"research",
"research",
"research",
"familiar",
"familiar",
"familiar",
"familiar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"question answering",
"question answering",
"question answering",
"question answering",
"Question Answering",
"Question Answering",
"Question Answering",
"Question Answering",
"",
"",
"",
""
],
"question_writer": [
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Chinese general corpus"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts."
],
"highlighted_evidence": [
"Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts."
]
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"0ab604dbe114dba174da645cc06a713e12a1fd9d",
"1f1495d06d0abe86ee52124ec9f2f0b25a536147"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BERT-Base",
"QANet"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Experimental Studies ::: Comparison with State-of-the-art Methods",
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23."
],
"highlighted_evidence": [
"Experimental Studies ::: Comparison with State-of-the-art Methods\nSince BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large."
]
},
{
"unanswerable": false,
"extractive_spans": [
"QANet BIBREF39",
"BERT-Base BIBREF26"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23.",
"FLOAT SELECTED: TABLE III COMPARATIVE RESULTS BETWEEN BERT AND OUR PROPOSED MODEL"
],
"highlighted_evidence": [
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. ",
"FLOAT SELECTED: TABLE III COMPARATIVE RESULTS BETWEEN BERT AND OUR PROPOSED MODEL"
]
}
],
"annotation_id": [
"0de2087bf0e46b14042de2a6e707bbf544a04556",
"c14d9acff1d3e6f47901e7104a7f01a10a727050"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained.",
"Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Fig. 1. An illustrative example of QA-CTS task.",
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.",
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows."
],
"highlighted_evidence": [
"FLOAT SELECTED: Fig. 1. An illustrative example of QA-CTS task.",
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.",
"Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. "
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "CTS is extracting structural data from medical research data (unstructured). Authors define QA-CTS task that aims to discover most related text from original text.",
"evidence": [
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.",
"However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.",
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows."
],
"highlighted_evidence": [
"Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly.",
"However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size).",
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text."
]
}
],
"annotation_id": [
"6d56080358bb7f22dd764934ffcd6d4e93fef0b2",
"da233cce57e642941da2446d3e053349c2ab1a15"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" three types of questions, namely tumor size, proximal resection margin and distal resection margin"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.",
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.",
"In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilize different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to QA-CTS task. Initially, sequential results of named entity recognition on both paragraph and query texts are integrated together. Contextualized representation on both paragraph and query texts are transformed by a pre-trained language model. Then, the integrated named entity information and contextualized representation are integrated together and fed into a feed forward network for final prediction. Experimental results on real-world dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks. The shared task and shared model introduced by QA-CTS task has also been proved to be useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance for a specific dataset is to pre-train the model in multiple datasets and then fine tune it on the specific dataset."
],
"highlighted_evidence": [
"Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data.",
"All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. ",
"Experimental results on real-world dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks."
]
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"7138d812ea70084e7610e5a2422039da1404afd7",
"b732d5561babcf37393ebf6cbb051d04b0b66bd5"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "the dataset consists of pathology reports including sentences and questions and answers about tumor size and resection margins so it does include additional sentences ",
"evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20."
],
"highlighted_evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. "
]
}
],
"annotation_id": [
"1d4d4965fd44fefbfed0b3267ef5875572994b66"
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "2,714 ",
"evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20."
],
"highlighted_evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs."
]
}
],
"annotation_id": [
"229cc59d1545c9e8f47d43053465e2dfd1b763cc"
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"e2fe2a3438f28758724d992502a44615051eda90"
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"2a73264b743b6dd183c200f7dcd04aed4029f015"
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"integrate clinical named entity information into pre-trained language model"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model.",
"In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word."
],
"highlighted_evidence": [
"We also propose an effective model to integrate clinical named entity information into pre-trained language model.",
"In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text."
]
}
],
"annotation_id": [
"5f125408e657282669f90a1866d8227c0f94332e"
],
"worker_id": [
"e70d8110563d53282f1a26e823d27e6f235772db"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"17,833 sentences, 826,987 characters and 2,714 question-answer pairs"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20."
],
"highlighted_evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. "
]
}
],
"annotation_id": [
"24c7023a5221b509d34dd6703d6e0607b2777e78"
],
"worker_id": [
"e70d8110563d53282f1a26e823d27e6f235772db"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"17,833 sentences, 826,987 characters and 2,714 question-answer pairs"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20."
],
"highlighted_evidence": [
"Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs."
]
}
],
"annotation_id": [
"d046d9ea83c5ffe607465e2fbc8817131c11e037"
],
"worker_id": [
"e70d8110563d53282f1a26e823d27e6f235772db"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23."
],
"highlighted_evidence": [
"Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large."
]
}
],
"annotation_id": [
"b3a3d6e707a67bab827053b40e446f30e416887f"
],
"worker_id": [
"e70d8110563d53282f1a26e823d27e6f235772db"
]
}
]
} | {
"caption": [
"Fig. 1. An illustrative example of QA-CTS task.",
"TABLE I AN ILLUSTRATIVE EXAMPLE OF NAMED ENTITY FEATURE TAGS",
"Fig. 2. The architecture of our proposed model for QA-CTS task",
"TABLE II STATISTICS OF DIFFERENT TYPES OF QUESTION ANSWER INSTANCES",
"TABLE V COMPARATIVE RESULTS FOR DIFFERENT INTEGRATION METHOD OF OUR PROPOSED MODEL",
"TABLE III COMPARATIVE RESULTS BETWEEN BERT AND OUR PROPOSED MODEL",
"TABLE VI COMPARATIVE RESULTS FOR DATA INTEGRATION ANALYSIS (WITHOUT TWO-STAGE TRAINING AND NAMED ENTITY INFORMATION)",
"TABLE VII COMPARATIVE RESULTS FOR DATA INTEGRATION ANALYSIS (WITH TWO-STAGE TRAINING AND NAMED ENTITY INFORMATION)",
"TABLE VIII COMPARATIVE RESULTS FOR DATA INTEGRATION ANALYSIS (USING MIXED-DATA PRE-TRAINED PARAMETERS)"
],
"file": [
"1-Figure1-1.png",
"2-TableI-1.png",
"3-Figure2-1.png",
"4-TableII-1.png",
"5-TableV-1.png",
"5-TableIII-1.png",
"6-TableVI-1.png",
"6-TableVII-1.png",
"6-TableVIII-1.png"
]
} |
1811.00942 | Progress and Tradeoffs in Neural Language Models | In recent years, we have witnessed a dramatic shift towards techniques driven by neural networks for a variety of NLP tasks. Undoubtedly, neural language models (NLMs) have reduced perplexity by impressive amounts. This progress, however, comes at a substantial cost in performance, in terms of inference latency and energy consumption, which is particularly of concern in deployments on mobile devices. This paper, which examines the quality-performance tradeoff of various language modeling techniques, represents to our knowledge the first to make this observation. We compare state-of-the-art NLMs with "classic" Kneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and prediction accuracy using two standard benchmarks. On a Raspberry Pi, we find that orders of increase in latency and energy usage correspond to less change in perplexity, while the difference is much less pronounced on a desktop. | {
"section_name": [
"Introduction",
"Background and Related Work",
"Experimental Setup",
"Hyperparameters and Training",
"Infrastructure",
"Results and Discussion",
"Conclusion"
],
"paragraphs": [
[
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .",
"Specifically focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a significant cost in terms of increased computational complexity. Computing the probability of a token sequence using non-neural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of floating point operations (FLOPs). These performance tradeoffs are worth discussing.",
"In truth, language models exist in a quality–performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud—say, machine translation—practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment.",
"There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life.",
"In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\\times $ longer and requires 32 $\\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point."
],
[
" BIBREF3 evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 . Since our focus is on comparing “core” neural and non-neural approaches, we disregard these extra optimizations techniques in all of our models.",
"Other work focus on designing lightweight models for resource-efficient inference on mobile devices. BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling; BIBREF9 examine shallow feedforward neural networks for natural language processing.",
"AWD-LSTM. BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD) is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks, “AWD-LSTM.”",
"Quasi-Recurrent Neural Networks. Quasi-recurrent neural networks (QRNNs; BIBREF10 ) achieve current state of the art in word-level language modeling BIBREF11 . A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input $\\mathbf {X} \\in \\mathbb {R}^{k \\times n}$ , the convolution layer is $\n\\mathbf {Z} = \\tanh (\\mathbf {W}_z \\cdot \\mathbf {X})\\\\\n\\mathbf {F} = \\sigma (\\mathbf {W}_f \\cdot \\mathbf {X})\\\\\n\\mathbf {O} = \\sigma (\\mathbf {W}_o \\cdot \\mathbf {X})\n$ ",
"where $\\sigma $ denotes the sigmoid function, $\\cdot $ represents masked convolution across time, and $\\mathbf {W}_{\\lbrace z, f, o\\rbrace } \\in \\mathbb {R}^{m \\times k \\times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$ . In the recurrent pooling layer, the convolution outputs are combined sequentially: $\n\\mathbf {c}_t &= \\mathbf {f}_t \\odot \\mathbf {c}_{t-1} + (1 -\n\\mathbf {f}_t) \\odot \\mathbf {z}_t\\\\\n\\mathbf {h}_t &= \\mathbf {o}_t \\odot \\mathbf {c}_t\n$ ",
"Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output $\\mathbf {h}_{1:t}$ being fed as the input into the subsequent layer: In language modeling, a four-layer QRNN is a standard architecture BIBREF11 .",
"Perplexity–Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at- $k$ , the fraction of top $k$ predictions that contain the correct word. A given R@ $k$ imposes a weak minimum perplexity constraint—there are many free parameters that allow for large variability in the perplexity given a certain R@ $k$ . Consider the corpus, “choo choo train,” with an associated unigram model $P(\\text{``choo''}) = 0.1$ , $P(\\text{``train''}) = 0.9$ , resulting in an R@1 of $1/3$ and perplexity of $4.8$ . Clearly, R@1 $ =1/3$ for all $P(\\text{``choo''}) \\le 0.5$ ; thus, perplexity can drop as low as 2 without affecting recall."
],
[
"We conducted our experiments on Penn Treebank (PTB; BIBREF12 ) and WikiText-103 (WT103; BIBREF13 ). Preprocessed by BIBREF14 , PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens.",
"For the neural language model, we used a four-layer QRNN BIBREF10 , which achieves state-of-the-art results on a variety of datasets, such as WT103 BIBREF11 and PTB. To compare against more common LSTM architectures, we also evaluated AWD-LSTM BIBREF4 on PTB. For the non-neural approach, we used a standard five-gram model with modified Kneser-Ney smoothing BIBREF15 , as explored in BIBREF16 on PTB. We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively.",
"For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set."
],
[
"The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11 . For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD BIBREF4 , then finetuned for 300 epochs using ASGD BIBREF17 , all with a learning rate of 30 throughout. For wt103-qrnn, we followed BIBREF11 and trained the QRNN for 14 epochs, using the Adam optimizer with a learning rate of $10^{-3}$ . We also applied regularization techniques from BIBREF4 ; all the specific hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights BIBREF18 and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of $r=2$ for the first layer and $r=1$ for the rest.",
"For the KN-5 model, we trained an off-the-shelf five-gram model using the popular SRILM toolkit BIBREF19 . We did not specify any special hyperparameters."
],
[
"We trained the QRNNs with PyTorch (0.4.0; commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 (Model B) running Raspbian Stretch (4.9.41-v7+). The Raspberry Pi (RPi) is not only a standard platform, but also a close surrogate to mobile phones, using the same Cortex-A7 in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meter that can be read programatically over USB at a frequency of 1 Hz. For the QRNNs, we used the first 350 words of the test set, and averaged the ms/query and mJ/query. For KN-5, we used the entire test set for evaluation, since the latency was much lower. To adjust for the base power load, we subtracted idle power draw from energy usage.",
"For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and Titan V GPU. Because the base power load for powering a desktop is much higher than running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly.",
"In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of underlying hardware. For our setup, the RPi costs $35 USD, the CPU costs $350 USD, and the GPU costs $3000 USD."
],
[
"To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1 ; we report the Skip- and AWD-LSTM results as seen in the original papers, while we report our QRNN results. Skip LSTM denotes the four-layer Skip LSTM in BIBREF3 . BIBREF20 focus on Hebbian softmax, a model extension technique—Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN-5 refers to the traditional five-gram model with modified Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM.",
"Perplexity–recall scale. In Figure 1 , using KN-5 as the model, we plot the log perplexity (cross entropy) and R@3 error ( $1 - \\text{R@3}$ ) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing the same R@3 value, as explained in Section \"Infrastructure\" . We also observe that the perplexity–recall scale is non-linear—instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB ( $r=0.85$ ), and an even stronger relationship on WT103 ( $r=0.94$ ). This is partially explained by WT103 having much longer sentences, and thus less noisy statistics.",
"From Figure 1 , we find that QRNN models yield strongly linear log perplexity–recall plots as well, where $r=0.88$ and $r=0.93$ for PTB and WT103, respectively. Note that, due to the improved model quality over KN-5, the point clouds are shifted downward compared to Figure 1 . We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these findings agree with those from BIBREF21 , which explores the log perplexity–word error rate scale in language modeling for speech recognition.",
"Quality–performance tradeoff. In Table 2 , from left to right, we report perplexity results on the validation and test sets, R@3 on test, and finally per-query latency and energy usage. On the RPi, KN-5 is both fast and power-efficient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2 , row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: The modern smartphone holds upwards of 10,000 joules BIBREF22 , and the latency is within usability standards BIBREF23 . Nevertheless, the models are still 49 $\\times $ slower and 32 $\\times $ more power-hungry than KN-5. The wt103-qrnn model is completely unusable on phones, taking over 1.2 seconds per next-word prediction. Neural models achieve perplexity drops of 60–80% and R@3 increases of 22–34%, but these improvements come at a much higher cost in latency and energy usage.",
"In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2–3) are 9 $\\times $ slower than KN-5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous BIBREF23 . If a high-end commodity GPU is available, then the models are only twice as slow as KN-5 is. From row 5, even better results are noted with wt103-qrnn: On the CPU, the QRNN is only 60% slower than KN-5 is, while the model is faster by 11 $\\times $ on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN-5 model, even without using GPU acceleration."
],
[
"In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality–performance tradeoffs between KN-5, a non-neural approach, and AWD-LSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: In one of the NLMs, a perplexity reduction by 2.5 $\\times $ results in a 49 $\\times $ rise in latency and 32 $\\times $ increase in energy usage, when compared to KN-5."
]
]
} | {
"question": [
"What aspects have been compared between various language models?",
"what classic language models are mentioned in the paper?",
"What is a commonly used evaluation metric for language models?"
],
"question_id": [
"dd155f01f6f4a14f9d25afc97504aefdc6d29c13",
"a9d530d68fb45b52d9bad9da2cd139db5a4b2f7c",
"e07df8f613dbd567a35318cd6f6f4cb959f5c82d"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Quality measures using perplexity and recall, and performance measured using latency and energy usage. ",
"evidence": [
"For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set."
],
"highlighted_evidence": [
"For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set."
]
}
],
"annotation_id": [
"c17796e0bd3bfcc64d5a8e844d23d8d39274af6b"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Kneser–Ney smoothing"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\\times $ longer and requires 32 $\\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point."
],
"highlighted_evidence": [
"Kneser–Ney smoothing",
"In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today."
]
}
],
"annotation_id": [
"715840b32a89c33e0a1de1ab913664eb9694bd34"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"perplexity"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
],
"highlighted_evidence": [
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
]
},
{
"unanswerable": false,
"extractive_spans": [
"perplexity"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
],
"highlighted_evidence": [
"recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
]
}
],
"annotation_id": [
"062dcccfdfb5af1c6ee886885703f9437d91a9dc",
"1cc952fc047d0bb1a961c3ce65bada2e983150d1"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
]
} | {
"caption": [
"Table 1: Comparison of neural language models on Penn Treebank and WikiText-103.",
"Figure 1: Log perplexity–recall error with KN-5.",
"Figure 2: Log perplexity–recall error with QRNN.",
"Table 2: Language modeling results on performance and model quality."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"4-Table2-1.png"
]
} |
1805.02400 | Stay On-Topic: Generating Context-specific Fake Restaurant Reviews | Automatically generated fake restaurant reviews are a threat to online review systems. Recent research has shown that users have difficulties in detecting machine-generated fake reviews hiding among real restaurant reviews. The method used in this work (char-LSTM ) has one drawback: it has difficulties staying in context, i.e. when it generates a review for specific target entity, the resulting review may contain phrases that are unrelated to the target, thus increasing its detectability. In this work, we present and evaluate a more sophisticated technique based on neural machine translation (NMT) with which we can generate reviews that stay on-topic. We test multiple variants of our technique using native English speakers on Amazon Mechanical Turk. We demonstrate that reviews generated by the best variant have almost optimal undetectability (class-averaged F-score 47%). We conduct a user study with skeptical users and show that our method evades detection more frequently compared to the state-of-the-art (average evasion 3.2/4 vs 1.5/4) with statistical significance, at level {\alpha} = 1% (Section 4.3). We develop very effective detection tools and reach average F-score of 97% in classifying these. Although fake reviews are very effective in fooling people, effective automatic detection is still feasible. | {
"section_name": [
"Introduction",
"Background",
"System Model",
"Attack Model",
"Generative Model"
],
"paragraphs": [
[
"Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1 ) to generate fake reviews, and concluded that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2 . Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independent of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant it changed the snippet garlic knots for breakfast with garlic knots for sushi).",
"We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for the each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates review that stay on topic. We can instantiate our basic technique into several variants. We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is INLINEFORM0 (whereas random would be INLINEFORM1 given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0 .",
"We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop an effective classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:"
],
[
"Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may be used to rank services in recommendations. Ratings have an affect on the outwards appearance. Already 8 years ago, researchers estimated that a one-star rating increase affects the business revenue by 5 – 9% on yelp.com BIBREF6 .",
"Due to monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for a monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8 . Year 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9 .",
"Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have resulted in an increase in the requirements of crowd-turfed reviews provided to review sites, which in turn has led to an increase in the cost of high-quality review. Due to the cost increase, researchers hypothesize the existence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on these BIBREF0 .",
"Detecting fake reviews can either be done on an individual level or as a system-wide detection tool (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 . For example, 20% of children that use online news sites in age group 12-15 believe that all information on news sites are true.",
"Neural Networks Neural networks are function compositions that map input data through INLINEFORM0 subsequent layers: DISPLAYFORM0 ",
"where the functions INLINEFORM0 are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens ( INLINEFORM1 ): DISPLAYFORM0 ",
"such that the language model can be used to predict how likely a specific token at time step INLINEFORM0 is, based on the INLINEFORM1 previous tokens. Tokens are typically either words or characters.",
"For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .",
"Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 ."
],
[
"We discuss the attack model, our generative machine learning method and controlling the generative process in this section."
],
[
"Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.",
"Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits including better economy and scalability (human workers are more expensive and slower) and reduced detectability (agent can better control the rate at which fake reviews are generated and posted).",
"We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.",
"The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as base for generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1 , and may mix-up concepts from different contexts during free-form generation. Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews. These may result in violations of known indicators for fake content BIBREF18 . For example, the review content may not match prior expectations nor the information need that the reader has. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model."
],
[
"We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) to reviews, 2) fast training time, and 3) a high-degree of customization during production time, e.g. introduction of specific waiter or food items names into reviews.",
"NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces one INLINEFORM0 -dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.",
"NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy beam search technique. We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used.",
"We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1 –5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks are not yet reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.",
"The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:",
"5 Public House Las Vegas NV Gastropubs Restaurants > Excellent",
"food and service . Pricey , but well worth it . I would recommend",
"the bone marrow and sampler platter for appetizers . \\end{verbatim}",
" ",
" ",
"\\noindent The order {\\textbf{[rating name city state tags]}} is kept constant.",
"Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.",
" ",
"\\subsubsection{Training Settings}",
" ",
"We train our NMT model on a commodity PC with a i7-4790k CPU (4.00GHz), with 32GB RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \\textendash 1,500 source tokens/s and approximately 5,730 \\textendash 5,830 output tokens/s. Training one epoch takes in average 72 minutes. The model is trained for 8 epochs, i.e. over night. We call fake review generated by this model \\emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.",
"We use the training settings: adam optimizer \\cite{kingma2014adam} with the suggested learning rate 0.001 \\cite{klein2017opennmt}. For most parts, parameters are at their default values. Notably, the maximum sentence length of input and output is 50 tokens by default.",
"We leverage the framework openNMT-py \\cite{klein2017opennmt} to teach the our NMT model.",
"We list used openNMT-py commands in Appendix Table~\\ref{table:openNMT-py_commands}.",
" ",
"\\begin{figure}[t]",
"\\begin{center}",
" \\begin{tabular}{ | l | }",
" \\hline",
"Example 2. Greedy NMT \\\\",
"Great food, \\underline{great} service, \\underline{great} \\textit{\\textit{beer selection}}. I had the \\textit{Gastropubs burger} and it",
"\\\\",
"was delicious. The \\underline{\\textit{beer selection}} was also \\underline{great}. \\\\",
"\\\\",
"Example 3. NMT-Fake* \\\\",
"I love this restaurant. Great food, great service. It's \\textit{a little pricy} but worth\\\\",
"it for the \\textit{quality} of the \\textit{beer} and atmosphere you can see in \\textit{Vegas}",
"\\\\",
" \\hline",
" \\end{tabular}",
" \\label{table:output_comparison}",
"\\end{center}",
"\\caption{Na\\\"{i}ve text generation with NMT vs. generation using our NTM model. Repetitive patterns are \\underline{underlined}. Contextual words are \\emph{italicized}. Both examples here are generated based on the context given in Example~1.}",
"\\label{fig:comparison}",
"\\end{figure}",
" ",
"\\subsection{Controlling generation of fake reviews}",
"\\label{sec:generating}",
" ",
"Greedy NMT beam searches are practical in many NMT cases. However, the results are simply repetitive, when naively applied to fake review generation (See Example~2 in Figure~\\ref{fig:comparison}).",
"The NMT model produces many \\emph{high-confidence} word predictions, which are repetitive and obviously fake. We calculated that in fact, 43\\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in greedy use of NMTs for text generation is clear.",
" ",
" ",
"\\begin{algorithm}[!b]",
" \\KwData{Desired review context $C_\\mathrm{input}$ (given as cleartext), NMT model}",
" \\KwResult{Generated review $out$ for input context $C_\\mathrm{input}$}",
"set $b=0.3$, $\\lambda=-5$, $\\alpha=\\frac{2}{3}$, $p_\\mathrm{typo}$, $p_\\mathrm{spell}$ \\\\",
"$\\log p \\leftarrow \\text{NMT.decode(NMT.encode(}C_\\mathrm{input}\\text{))}$ \\\\",
"out $\\leftarrow$ [~] \\\\",
"$i \\leftarrow 0$ \\\\",
"$\\log p \\leftarrow \\text{Augment}(\\log p$, $b$, $\\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\\\",
"\\While{$i=0$ or $o_i$ not EOS}{",
"$\\log \\Tilde{p} \\leftarrow \\text{Augment}(\\log p$, $b$, $\\lambda$, $\\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \\& memory penalty~\\\\",
"$o_i \\leftarrow$ \\text{NMT.beam}($\\log \\Tilde{p}$, out) \\\\",
"out.append($o_i$) \\\\",
"$i \\leftarrow i+1$",
"}\\text{return}~$\\text{Obfuscate}$(out,~$p_\\mathrm{typo}$,~$p_\\mathrm{spell}$)",
"\\caption{Generation of NMT-Fake* reviews.}",
"\\label{alg:base}",
"\\end{algorithm}",
" ",
"In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\\ref{fig:comparison}.",
"We outline pseudocode for our methodology of generating fake reviews in Algorithm~\\ref{alg:base}. There are several parameters in our algorithm.",
"The details of the algorithm will be shown later.",
"We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.",
"We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.",
" ",
"\\subsubsection{Variation in word content}",
" ",
"Example 2 in Figure~\\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \\textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing probabilities (log-likelihoods \\cite{murphy2012machine}) of the generators LM, the decoder.",
"We constrain the generation of sentences by randomly \\emph{imposing penalties to words}.",
"We tried several forms of added randomness, and found that adding constant penalties to a \\emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \\emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).",
" ",
" ",
"\\paragraph{Bernoulli penalties to language model}",
"To avoid generic sentences components, we augment the default language model $p(\\cdot)$ of the decoder by",
" ",
"\\begin{equation}",
"\\log \\Tilde{p}(t_k) = \\log p(t_k | t_i, \\dots, t_1) + \\lambda q,",
"\\end{equation}",
" ",
"where $q \\in R^{V}$ is a vector of Bernoulli-distributed random values that obtain values $1$ with probability $b$ and value $0$ with probability $1-b_i$, and $\\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\\lambda$ is a soft penalty of including ``forgotten'' words in a review.",
"$\\lambda q_k$ emphasizes sentence forming with non-penalized words. The randomness is reset at the start of generating a new review.",
"Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\\lambda$, with a user study in Section~\\ref{sec:varying}.",
" ",
"\\paragraph{Start penalty}",
"We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \\cite{li2016diversity}, we add a random start penalty $\\lambda s^\\mathrm{i}$, to our language model, which decreases monotonically for each generated token. We set $\\alpha \\leftarrow 0.66$ as it's effect decreases by 90\\% every 5 words generated.",
" ",
"\\paragraph{Penalty for reusing words}",
"Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \\textit{great} in Example~2).",
"To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.",
"Concretely, we add the penalty $\\lambda$ to each word that has been generated by the greedy search.",
" ",
"\\subsubsection{Improving sentence coherence}",
"\\label{sec:grammar}",
"We visually analyzed reviews after applying these penalties to our NMT model. While the models were clearly diverse, they were \\emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Amongst others, the use of punctuation was erratic, and pronouns were used semantically wrongly (e.g. \\emph{he}, \\emph{she} might be replaced, as could ``and''/``but''). To improve the authenticity of our reviews, we added several \\emph{grammar-based rules}.",
" ",
"English language has several classes of words which are important for the natural flow of sentences.",
"We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if), punctuation (e.g. ,/.,..), and apply only half memory penalties for these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\\ref{alg:aug}.",
"The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\\ref{fig:comparison}.",
" ",
"\\begin{algorithm}[!t]",
" \\KwData{Initial log LM $\\log p$, Bernoulli probability $b$, soft-penalty $\\lambda$, monotonic factor $\\alpha$, last generated token $o_i$, grammar rules set $G$}",
" \\KwResult{Augmented log LM $\\log \\Tilde{p}$}",
"\\begin{algorithmic}[1]",
"\\Procedure {Augment}{$\\log p$, $b$, $\\lambda$, $\\alpha$, $o_i$, $i$}{ \\\\",
"generate $P_{\\mathrm{1:N}} \\leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\\text{One value} \\in \\{0,1\\}~\\text{per token}$~ \\\\",
"$I \\leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\\\",
"$\\log \\Tilde{p} \\leftarrow$ $\\text{Discount}$($\\log p$, $I$, $\\lambda \\cdot \\alpha^i$,$G$) ~~~~~~ |~start penalty~\\\\",
"$\\log \\Tilde{p} \\leftarrow$ $\\text{Discount}$($\\log \\Tilde{p}$, $[o_i]$, $\\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\\\",
"\\textbf{return}~$\\log \\Tilde{p}$",
"}",
"\\EndProcedure",
"\\\\",
"\\Procedure {Discount}{$\\log p$, $I$, $\\lambda$, $G$}{",
"\\State{\\For{$i \\in I$}{",
"\\eIf{$o_i \\in G$}{",
"$\\log p_{i} \\leftarrow \\log p_{i} + \\lambda/2$",
"}{",
"$\\log p_{i} \\leftarrow \\log p_{i} + \\lambda$}",
"}\\textbf{return}~$\\log p$",
"\\EndProcedure",
"}}",
"\\end{algorithmic}",
"\\caption{Pseudocode for augmenting language model. }",
"\\label{alg:aug}",
"\\end{algorithm}",
" ",
"\\subsubsection{Human-like errors}",
"\\label{sec:obfuscation}",
"We notice that our NMT model produces reviews without grammar mistakes.",
"This is unlike real human writers, whose sentences contain two types of language mistakes 1) \\emph{typos} that are caused by mistakes in the human motoric input, and 2) \\emph{common spelling mistakes}.",
"We scraped a list of common English language spelling mistakes from Oxford dictionary\\footnote{\\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \\emph{re-introducing spelling mistakes}.",
"Similarly, typos are randomly reintroduced based on the weighted edit distance\\footnote{\\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.",
"We use autocorrection tools\\footnote{\\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.",
"We call these augmentations \\emph{obfuscations}, since they aim to confound the reader to think a human has written them. We omit the pseudocode description for brevity.",
" ",
"\\subsection{Experiment: Varying generation parameters in our NMT model}",
"\\label{sec:varying}",
" ",
"Parameters $b$ and $\\lambda$ control different aspects in fake reviews.",
"We show six different examples of generated fake reviews in Table~\\ref{table:categories}.",
"Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.",
"This occurs because a large portion of vocabulary is ``forgotten''. Reviews with $b \\geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (''experience was awesome'').",
"Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. Parameter $\\lambda$'s is more subtle: it affects how random review starts are and to a degree, the discontinuation between statements within the review.",
"We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.",
" ",
" ",
"\\begin{table}[!b]",
"\\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}",
"\\begin{center}",
" \\begin{tabular}{ | l | l | }",
" \\hline",
" $(b, \\lambda)$ & Example review for context \\\\ \\hline",
" \\hline",
" $(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\\\",
" & The staff is very friendly and always remembers u when we come in\\\\\\hline",
" $(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\\\",
" & they make it fresh to order. Great place, good service and nice staff\\\\\\hline",
" $(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\\\",
" & always so polite. They have great happy hour specials and they have a lot\\\\",
" & of options.\\\\\\hline",
" $(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\\\",
" & experience was awesome.\\\\ \\hline",
" $(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\\\",
" & both of us love this restaurant....\\\\\\hline",
" $(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\\\",
" & wraps. Great drinks and wine! Can't wait to go back so soon!!\\\\ \\hline",
" \\end{tabular}",
" \\label{table:categories}",
"\\end{center}",
"\\end{table}",
" ",
"\\subsubsection{MTurk study}",
"\\label{sec:amt}",
"We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.",
"We randomly generated each survey for the participants. Each review had a 50\\% chance to be real or fake. The fake ones further were chosen among six (6) categories of fake reviews (Table~\\ref{table:categories}).",
"The restaurant and the city was given as contextual information to the participants. Our aim was to use this survey to understand how well English-speakers react to different parametrizations of NMT-Fake reviews.",
"Table~\\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\\%) was revealed to the participants prior to the study.",
" ",
"We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had big difficulties in detecting our fake reviews. In average, the reviews were detected with class-averaged \\emph{F-score of only 56\\%}, with 53\\% F-score for fake review detection and 59\\% F-score for real review detection. The results are very close to \\emph{random detection}, where precision, recall and F-score would each be 50\\%. Results are recorded in Table~\\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since human detection rate across categories is close to random.",
" ",
"\\begin{table}[t]",
"\\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}",
"\\begin{center}",
" \\begin{tabular}{ | c | c |c |c | c | }",
" \\hline",
" \\multicolumn{5}{|c|}{Classification report}",
" \\\\ \\hline",
" Review Type & Precision & Recall & F-score & Support \\\\ \\hline",
" \\hline",
" Human & 55\\% & 63\\% & 59\\% & 994\\\\",
" NMT-Fake & 57\\% & 50\\% & 53\\% & 1006 \\\\",
" \\hline",
" \\end{tabular}",
" \\label{table:MTurk_super}",
"\\end{center}",
"\\end{table}",
" ",
"We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had most difficulties recognizing reviews of category $(b=0.3, \\lambda=-5)$, where true positive rate was $40.4\\%$, while the true negative rate of the real class was $62.7\\%$. The precision were $16\\%$ and $86\\%$, respectively. The class-averaged F-score is $47.6\\%$, which is close to random. Detailed classification reports are shown in Table~\\ref{table:MTurk_sub} in Appendix. Our MTurk-study shows that \\emph{our NMT-Fake reviews pose a significant threat to review systems}, since \\emph{ordinary native English-speakers have very big difficulties in separating real reviews from fake reviews}. We use the review category $(b=0.3, \\lambda=-5)$ for future user tests in this paper, since MTurk participants had most difficulties detecting these reviews. We refer to this category as NMT-Fake* in this paper.",
" ",
"\\section{Evaluation}",
"\\graphicspath{ {figures/}}",
" ",
"We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and proceed with a user study with experienced participants. We demonstrate the statistical difference to existing fake review types \\cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigate classification performance.",
" ",
"\\subsection{Replication of state-of-the-art model: LSTM}",
"\\label{sec:repl}",
" ",
"Yao et al. \\cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained over the Yelp Challenge dataset using a two-layer character-based LSTM model.",
"We requested the authors of \\cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.",
" ",
"We used the same graphics card (GeForce GTX) and trained using the same framework (torch-RNN in lua). We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \\cite{yao2017automated} and email correspondence. We call fake reviews generated by this model LSTM-Fake reviews.",
" ",
"\\subsection{Similarity to existing fake reviews}",
"\\label{sec:automated}",
" ",
"We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.",
" ",
"For `a' (Figure~\\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \\cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).",
" ",
"For `b' (Figure~\\ref{fig:shill}),we the ``Yelp Shills'' dataset (combination of YelpZip \\cite{mukherjee2013yelp}, YelpNYC \\cite{mukherjee2013yelp}, YelpChi \\cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (''shill reviews'')\\footnote{Note that shill reviews are probably generated by human shills \\cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (''genuine''). We use 100,000 reviews from each category to train a classifier. We use features from the commercial psychometric tool LIWC2015 \\cite{pennebaker2015development} to generated features.",
" ",
"In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\\ref{fig:lstm} and~\\ref{fig:shill} show the results. The classification threshold of 50\\% is marked with a dashed line.",
" ",
"\\begin{figure}",
" \\begin{subfigure}[b]{0.5\\columnwidth}",
" \\includegraphics[width=\\columnwidth]{figures/lstm.png}",
" \\caption{Human--LSTM reviews.}",
" \\label{fig:lstm}",
" \\end{subfigure}",
" \\begin{subfigure}[b]{0.5\\columnwidth}",
" \\includegraphics[width=\\columnwidth]{figures/distribution_shill.png}",
" \\caption{Genuine--Shill reviews.}",
" \\label{fig:shill}",
" \\end{subfigure}",
" \\caption{",
" Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\\emph{genuine} and \\emph{shill}) reviews. Figure~\\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\\ref{fig:shill} shows NMT-Fake* reviews are more similar to \\emph{genuine} reviews than \\emph{shill} reviews.",
" }",
" \\label{fig:statistical_similarity}",
"\\end{figure}",
" ",
"We can see that our new generated reviews do not share strong attributes with previous known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than previous fake reviews. We thus conjecture that our NMT-Fake* fake reviews present a category of fake reviews that may go undetected on online review sites.",
" ",
" ",
"\\subsection{Comparative user study}",
"\\label{sec:comparison}",
"We wanted to evaluate the effectiveness of fake reviews againsttech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with computer science education and at least one university degree. Participant demographics are shown in Table~\\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \\emph{experienced participants}.",
"No personal data was collected during the user study.",
" ",
"Each person was given two randomly selected sets of 30 of reviews (a total of 60 reviews per person) with reviews containing 10 \\textendash 50 words each.",
"Each set contained 26 (87\\%) real reviews from Yelp and 4 (13\\%) machine-generated reviews,",
"numbers chosen based on suspicious review prevalence on Yelp~\\cite{mukherjee2013yelp,rayana2015collective}.",
"One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \\lambda=-5$) or LSTM),",
"and the other set reviews from the other in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.",
" ",
"Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\\ref{fig:screenshot} in Appendix.",
" ",
"\\begin{figure}[!ht]",
"\\centering",
"\\includegraphics[width=.7\\columnwidth]{detection2.png}",
"\\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\\pm0.7$ for NMT-Fake* and $2.5\\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}",
"\\label{fig:aalto}",
"\\end{figure}",
" ",
" ",
"Figure~\\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.",
"NMT-Fake* reviews are significantly more difficult to detect for our experienced participants. In average, detection rate (recall) is $20\\%$ for NMT-Fake* reviews, compared to $61\\%$ for LSTM-based reviews.",
"The precision (and F-score) is the same as the recall in our study, since participants labeled 4 fakes in each set of 30 reviews \\cite{murphy2012machine}.",
"The distribution of the detection across participants is shown in Figure~\\ref{fig:aalto}. \\emph{The difference is statistically significant with confidence level $99\\%$} (Welch's t-test).",
"We compared the detection rate of NMT-Fake* reviews to a random detector, and find that \\emph{our participants detection rate of NMT-Fake* reviews is not statistically different from random predictions with 95\\% confidence level} (Welch's t-test).",
" ",
" ",
"\\section{Defenses}",
" ",
"\\label{sec:detection}",
" ",
"We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\\ref{table:features_adaboost} (Appendix).",
"We used word-level features based on spaCy-tokenization \\cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representation of POS-tags and dependency tree tags. We added readability features from NLTK~\\cite{bird2004nltk}.",
" ",
"\\begin{figure}[ht]",
"\\centering",
"\\includegraphics[width=.7\\columnwidth]{obf_score_fair_2.png}",
"\\caption{",
"Adaboost-based classification of NMT-Fake and human-written reviews.",
"Effect of varying $b$ and $\\lambda$ in fake review generation.",
"The variant native speakers had most difficulties detecting is well detectable by AdaBoost (97\\%).}",
"\\label{fig:adaboost_matrix_b_lambda}",
"\\end{figure}",
" ",
" ",
"Figure~\\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kind of fake reviews. The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \\lambda=-5$) are detected with an excellent 97\\% F-score.",
"The most important features for the classification were counts for frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult to detect for humans, they can be well detected with the right tools.",
" ",
"\\section{Related Work}",
" ",
"Kumar and Shah~\\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \\emph{opinion-based false information}, where the creator of the review may influence reader's opinions or decisions.",
"Yao et al. \\cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.",
"Supporting our study, Everett et al~\\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.",
" ",
"Diversification of NMT model outputs has been studied in \\cite{li2016diversity}. The authors proposed the use of a penalty to commonly occurring sentences (\\emph{n-grams}) in order to emphasize maximum mutual information-based generation.",
"The authors investigated the use of NMT models in chatbot systems.",
"We found that unigram penalties to random tokens (Algorithm~\\ref{alg:aug}) was easy to implement and produced sufficiently diverse responses.",
" ",
"\\section {Discussion and Future Work}",
" ",
"\\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \\cite{yao2017automated} had a perplexity of approximately $90$ \\footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \\emph{better structure} in the generated sentences (i.e. a more coherent story).",
" ",
"\\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\\% reduction in time compared to the state-of-the-art \\cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.",
" ",
"\\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \\emph{Mike} in the log-likelihood resulted in approximately 10\\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.",
" ",
"\\paragraph{Ease of testing} Our diversification scheme is applicable during \\emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\\lambda$.",
" ",
" ",
" ",
"\\paragraph{Languages} The generation methodology is not per-se language-dependent. The requirement for successful generation is that sufficiently much data exists in the targeted language. However, our language model modifications require some knowledge of that target language's grammar to produce high-quality reviews.",
" ",
"\\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\\ref{sec:automated}). We see this an open problem that deserves more attention in fake reviews research.",
" ",
"\\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.",
" ",
"\\section{Conclusion}",
" ",
"In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.",
"This supports anecdotal evidence \\cite{national2017commission}.",
"Our technique is more effective than state-of-the-art \\cite{yao2017automated}.",
"We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.",
"We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.",
"Robust detection of fake reviews is thus still an open problem.",
" ",
" ",
"\\section*{Acknowledgments}",
"We thank Tommi Gr\\\"{o}ndahl for assistance in planning user studies and the",
"participants of the user study for their time and feedback. We also thank",
"Luiza Sayfullina for comments that improved the manuscript.",
"We thank the authors of \\cite{yao2017automated} for answering questions about",
"their work.",
" ",
" ",
"\\bibliographystyle{splncs}",
"\\begin{thebibliography}{10}",
" ",
"\\bibitem{yao2017automated}",
"Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:",
"\\newblock Automated crowdturfing attacks and defenses in online review systems.",
"\\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and",
" Communications Security, ACM (2017)",
" ",
"\\bibitem{murphy2012machine}",
"Murphy, K.:",
"\\newblock Machine learning: a probabilistic approach.",
"\\newblock Massachusetts Institute of Technology (2012)",
" ",
"\\bibitem{challenge2013yelp}",
"Yelp:",
"\\newblock {Yelp Challenge Dataset} (2013)",
" ",
"\\bibitem{mukherjee2013yelp}",
"Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:",
"\\newblock What yelp fake review filter might be doing?",
"\\newblock In: Seventh International AAAI Conference on Weblogs and Social Media",
" (ICWSM). (2013)",
" ",
"\\bibitem{rayana2015collective}",
"Rayana, S., Akoglu, L.:",
"\\newblock Collective opinion spam detection: Bridging review networks and",
" metadata.",
"\\newblock In: {}Proceedings of the 21th ACM SIGKDD International Conference on",
" Knowledge Discovery and Data Mining",
" ",
"\\bibitem{o2008user}",
"{O'Connor}, P.:",
"\\newblock {User-generated content and travel: A case study on Tripadvisor.com}.",
"\\newblock Information and communication technologies in tourism 2008 (2008)",
" ",
"\\bibitem{luca2010reviews}",
"Luca, M.:",
"\\newblock {Reviews, Reputation, and Revenue: The Case of Yelp. com}.",
"\\newblock {Harvard Business School} (2010)",
" ",
"\\bibitem{wang2012serf}",
"Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:",
"\\newblock Serf and turf: crowdturfing for fun and profit.",
"\\newblock In: Proceedings of the 21st international conference on World Wide",
" Web (WWW), ACM (2012)",
" ",
"\\bibitem{rinta2017understanding}",
"Rinta-Kahila, T., Soliman, W.:",
"\\newblock Understanding crowdturfing: The different ethical logics behind the",
" clandestine industry of deception.",
"\\newblock In: ECIS 2017: Proceedings of the 25th European Conference on",
" Information Systems. (2017)",
" ",
"\\bibitem{luca2016fake}",
"Luca, M., Zervas, G.:",
"\\newblock Fake it till you make it: Reputation, competition, and yelp review",
" fraud.",
"\\newblock Management Science (2016)",
" ",
"\\bibitem{national2017commission}",
"{National Literacy Trust}:",
"\\newblock Commission on fake news and the teaching of critical literacy skills",
" in schools URL:",
" \\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.",
" ",
"\\bibitem{jurafsky2014speech}",
"Jurafsky, D., Martin, J.H.:",
"\\newblock Speech and language processing. Volume~3.",
"\\newblock Pearson London: (2014)",
" ",
"\\bibitem{kingma2014adam}",
"Kingma, D.P., Ba, J.:",
"\\newblock Adam: A method for stochastic optimization.",
"\\newblock arXiv preprint arXiv:1412.6980 (2014)",
" ",
"\\bibitem{cho2014learning}",
"Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,",
" Schwenk, H., Bengio, Y.:",
"\\newblock Learning phrase representations using rnn encoder--decoder for",
" statistical machine translation.",
"\\newblock In: Proceedings of the 2014 Conference on Empirical Methods in",
" Natural Language Processing (EMNLP). (2014)",
" ",
"\\bibitem{klein2017opennmt}",
"Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:",
"\\newblock Opennmt: Open-source toolkit for neural machine translation.",
"\\newblock Proceedings of ACL, System Demonstrations (2017)",
" ",
"\\bibitem{wu2016google}",
"Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,",
" M., Cao, Y., Gao, Q., Macherey, K., et~al.:",
"\\newblock Google's neural machine translation system: Bridging the gap between",
" human and machine translation.",
"\\newblock arXiv preprint arXiv:1609.08144 (2016)",
" ",
"\\bibitem{mei2017coherent}",
"Mei, H., Bansal, M., Walter, M.R.:",
"\\newblock Coherent dialogue with attention-based language models.",
"\\newblock In: AAAI. (2017) 3252--3258",
" ",
"\\bibitem{li2016diversity}",
"Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:",
"\\newblock A diversity-promoting objective function for neural conversation",
" models.",
"\\newblock In: Proceedings of NAACL-HLT. (2016)",
" ",
"\\bibitem{rubin2006assessing}",
"Rubin, V.L., Liddy, E.D.:",
"\\newblock Assessing credibility of weblogs.",
"\\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing",
" Weblogs. (2006)",
" ",
"\\bibitem{zhao2017news}",
"news.com.au:",
"\\newblock {The potential of AI generated 'crowdturfing' could undermine online",
" reviews and dramatically erode public trust} URL:",
" \\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.",
" ",
"\\bibitem{pennebaker2015development}",
"Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:",
"\\newblock {The development and psychometric properties of LIWC2015}.",
"\\newblock Technical report (2015)",
" ",
"\\bibitem{honnibal-johnson:2015:EMNLP}",
"Honnibal, M., Johnson, M.:",
"\\newblock An improved non-monotonic transition system for dependency parsing.",
"\\newblock In: Proceedings of the 2015 Conference on Empirical Methods in",
" Natural Language Processing (EMNLP), ACM (2015)",
" ",
"\\bibitem{bird2004nltk}",
"Bird, S., Loper, E.:",
"\\newblock {NLTK: the natural language toolkit}.",
"\\newblock In: Proceedings of the ACL 2004 on Interactive poster and",
" demonstration sessions, Association for Computational Linguistics (2004)",
" ",
"\\bibitem{kumar2018false}",
"Kumar, S., Shah, N.:",
"\\newblock False information on web and social media: A survey.",
"\\newblock arXiv preprint arXiv:1804.08559 (2018)",
" ",
"\\bibitem{Everett2016Automated}",
"Everett, R.M., Nurse, J.R.C., Erola, A.:",
"\\newblock The anatomy of online deception: What makes automated text",
" convincing?",
"\\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied",
" Computing. SAC '16, ACM (2016)",
" ",
"\\end{thebibliography}",
" ",
" ",
" ",
"\\section*{Appendix}",
" ",
"We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\\ref{table:amt_pop}.",
" ",
"\\begin{table}",
"\\caption{User study statistics.}",
"\\begin{center}",
" \\begin{tabular}{ | l | c | c | }",
" \\hline",
" Quality & Mechanical Turk users & Experienced users\\\\",
" \\hline",
" Native English Speaker & Yes (20) & Yes (1) No (19) \\\\",
" Fluent in English & Yes (20) & Yes (20) \\\\",
" Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\\\",
" Gender & Male (14) Female (6) & Male (17) Female (3)\\\\",
" Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\\\",
" \\hline",
" \\end{tabular}",
" \\label{table:amt_pop}",
"\\end{center}",
"\\end{table}",
" ",
" ",
"Table~\\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.",
" ",
"\\begin{table}[t]",
"\\caption{Listing of used openNMT-py commands.}",
"\\begin{center}",
" \\begin{tabular}{ | l | l | }",
" \\hline",
" Phase & Bash command \\\\",
" \\hline",
" Preprocessing & \\begin{lstlisting}[language=bash]",
"python preprocess.py -train_src context-train.txt",
"-train_tgt reviews-train.txt -valid_src context-val.txt",
"-valid_tgt reviews-val.txt -save_data model",
"-lower -tgt_words_min_frequency 10",
"\\end{lstlisting}",
" \\\\ & \\\\",
" Training & \\begin{lstlisting}[language=bash]",
"python train.py -data model -save_model model -epochs 8",
"-gpuid 0 -learning_rate_decay 0.5 -optim adam",
"-learning_rate 0.001 -start_decay_at 3\\end{lstlisting}",
" \\\\ & \\\\",
" Generation & \\begin{lstlisting}[language=bash]",
"python translate.py -model model_acc_35.54_ppl_25.68_e8.pt",
"-src context-tst.txt -output pred-e8.txt -replace_unk",
"-verbose -max_length 50 -gpu 0",
" \\end{lstlisting} \\\\",
" \\hline",
" \\end{tabular}",
" \\label{table:openNMT-py_commands}",
"\\end{center}",
"\\end{table}",
" ",
" ",
"Table~\\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across different categories of NMT-Fake reviews. The category with best performance ($b=0.3, \\lambda=-5$) is denoted as NMT-Fake*.",
" ",
"\\begin{table}[b]",
"\\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are $p_\\mathrm{human} = 86\\%$ and $p_\\mathrm{machine} = 14\\%$, with $r_\\mathrm{human} = r_\\mathrm{machine} = 50\\%$. Class-averaged F-scores for random predictions are $42\\%$.}",
"\\begin{center}",
" \\begin{tabular}{ | c || c |c |c | c | }",
" \\hline",
" $(b=0.3, \\lambda = -3)$ & Precision & Recall & F-score & Support \\\\ \\hline",
" Human & 89\\% & 63\\% & 73\\% & 994\\\\",
" NMT-Fake & 15\\% & 45\\% & 22\\% & 146 \\\\",
" \\hline",
" \\hline",
" $(b=0.3, \\lambda = -5)$ & Precision & Recall & F-score & Support \\\\ \\hline",
" Human & 86\\% & 63\\% & 73\\% & 994\\\\",
" NMT-Fake* & 16\\% & 40\\% & 23\\% & 171 \\\\",
" \\hline",
" \\hline",
" $(b=0.5, \\lambda = -4)$ & Precision & Recall & F-score & Support \\\\ \\hline",
" Human & 88\\% & 63\\% & 73\\% & 994\\\\",
" NMT-Fake & 21\\% & 55\\% & 30\\% & 181 \\\\",
" \\hline",
" \\hline",
" $(b=0.7, \\lambda = -3)$ & Precision & Recall & F-score & Support \\\\ \\hline",
" Human & 88\\% & 63\\% & 73\\% & 994\\\\",
" NMT-Fake & 19\\% & 50\\% & 27\\% & 170 \\\\",
" \\hline",
" \\hline",
" $(b=0.7, \\lambda = -5)$ & Precision & Recall & F-score & Support \\\\ \\hline",
" Human & 89\\% & 63\\% & 74\\% & 994\\\\",
" NMT-Fake & 21\\% & 57\\% & 31\\% & 174 \\\\",
" \\hline",
" \\hline",
" $(b=0.9, \\lambda = -4)$ & Precision & Recall & F-score & Support \\\\ \\hline",
" Human & 88\\% & 63\\% & 73\\% & 994\\\\",
" NMT-Fake & 18\\% & 50\\% & 27\\% & 164 \\\\",
" \\hline",
" \\end{tabular}",
" \\label{table:MTurk_sub}",
"\\end{center}",
"\\end{table}",
" ",
"Figure~\\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.",
" ",
"\\begin{figure}[ht]",
"\\centering",
"\\includegraphics[width=1.\\columnwidth]{figures/screenshot_7-3.png}",
"\\caption{",
"Screenshots of the first two pages in the user study. Example 1 is a NMT-Fake* review, the rest are human-written.",
"}",
"\\label{fig:screenshot}",
"\\end{figure}",
" ",
"Table~\\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.",
" ",
"\\begin{table}",
"\\caption{Features used in NMT-Fake review detector.}",
"\\begin{center}",
" \\begin{tabular}{ | l | c | }",
" \\hline",
" Feature type & Number of features \\\\ \\hline",
" \\hline",
" Readability features & 13 \\\\ \\hline",
" Unique POS tags & $~20$ \\\\ \\hline",
" Word unigrams & 22,831 \\\\ \\hline",
" 1/2/3/4-grams of simple part-of-speech tags & 54,240 \\\\ \\hline",
" 1/2/3-grams of detailed part-of-speech tags & 112,944 \\\\ \\hline",
" 1/2/3-grams of syntactic dependency tags & 93,195 \\\\ \\hline",
" \\end{tabular}",
" \\label{table:features_adaboost}",
"\\end{center}",
"\\end{table}",
" ",
"\\end{document}",
""
]
]
} | {
"question": [
"Which dataset do they use a starting point in generating fake reviews?",
"Do they use a pretrained NMT model to help generating reviews?",
"How does using NMT ensure generated reviews stay on topic?",
"What kind of model do they use for detection?",
"Does their detection tool work better than human detection?",
"How many reviews in total (both generated and true) do they evaluate on Amazon Mechanical Turk?"
],
"question_id": [
"1a43df221a567869964ad3b275de30af2ac35598",
"98b11f70239ef0e22511a3ecf6e413ecb726f954",
"d4d771bcb59bab4f3eb9026cda7d182eb582027d",
"12f1919a3e8ca460b931c6cacc268a926399dff4",
"cd1034c183edf630018f47ff70b48d74d2bb1649",
"bd9930a613dd36646e2fc016b6eb21ab34c77621"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"the Yelp Challenge dataset"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1 –5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks are not yet reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context."
],
"highlighted_evidence": [
"We use the Yelp Challenge dataset BIBREF2 for our fake review generation. "
]
},
{
"unanswerable": false,
"extractive_spans": [
"Yelp Challenge dataset BIBREF2"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1 –5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks are not yet reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context."
],
"highlighted_evidence": [
"We use the Yelp Challenge dataset BIBREF2 for our fake review generation."
]
}
],
"annotation_id": [
"a6c6f62389926ad2d6b21e8b3bbd5ee58e32ccd2",
"c3b77db9a4c6f4e898460912ef2da68a2e55ba57"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"5c25c0877f37421f694b29c367cc344a9ce048c1",
"bbc27527be0e66597f3d157df615c449cd3ce805"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"855e5c21e865eb289ae9bfd97d81665ebd1f1e0f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"AdaBoost-based classifier"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\\ref{table:features_adaboost} (Appendix)."
],
"highlighted_evidence": [
"We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2)."
]
}
],
"annotation_id": [
"72e4b5a0cedcbc980f7040caf4347c1079a5c474"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had most difficulties recognizing reviews of category $(b=0.3, \\lambda=-5)$, where true positive rate was $40.4\\%$, while the true negative rate of the real class was $62.7\\%$. The precision were $16\\%$ and $86\\%$, respectively. The class-averaged F-score is $47.6\\%$, which is close to random. Detailed classification reports are shown in Table~\\ref{table:MTurk_sub} in Appendix. Our MTurk-study shows that \\emph{our NMT-Fake reviews pose a significant threat to review systems}, since \\emph{ordinary native English-speakers have very big difficulties in separating real reviews from fake reviews}. We use the review category $(b=0.3, \\lambda=-5)$ for future user tests in this paper, since MTurk participants had most difficulties detecting these reviews. We refer to this category as NMT-Fake* in this paper.",
"Figure~\\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kind of fake reviews. The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \\lambda=-5$) are detected with an excellent 97\\% F-score."
],
"highlighted_evidence": [
"The respondents in our MTurk survey had most difficulties recognizing reviews of category $(b=0.3, \\lambda=-5)$, where true positive rate was $40.4\\%$, while the true negative rate of the real class was $62.7\\%$. The precision were $16\\%$ and $86\\%$, respectively. The class-averaged F-score is $47.6\\%$, which is close to random. Detailed classification reports are shown in Table~\\ref{table:MTurk_sub} in Appendix. Our MTurk-study shows that \\emph{our NMT-Fake reviews pose a significant threat to review systems}, since \\emph{ordinary native English-speakers have very big difficulties in separating real reviews from fake reviews}. We use the review category $(b=0.3, \\lambda=-5)$ for future user tests in this paper, since MTurk participants had most difficulties detecting these reviews. We refer to this category as NMT-Fake* in this paper.",
"The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \\lambda=-5$) are detected with an excellent 97\\% F-score."
]
}
],
"annotation_id": [
"06f2c923d36116318aab2f7cb82d418020654d74"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"1,006 fake reviews and 994 real reviews"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had big difficulties in detecting our fake reviews. In average, the reviews were detected with class-averaged \\emph{F-score of only 56\\%}, with 53\\% F-score for fake review detection and 59\\% F-score for real review detection. The results are very close to \\emph{random detection}, where precision, recall and F-score would each be 50\\%. Results are recorded in Table~\\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since human detection rate across categories is close to random."
],
"highlighted_evidence": [
"We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews)."
]
}
],
"annotation_id": [
"1249dad8a00c798589671ed2271454f6871fadad"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Fig. 1: Näıve text generation with NMT vs. generation using our NTM model. Repetitive patterns are underlined. Contextual words are italicized. Both examples here are generated based on the context given in Example 1.",
"Table 1: Six different parametrizations of our NMT reviews and one example for each. The context is “5 P . F . Chang ’ s Scottsdale AZ” in all examples.",
"Table 2: Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).",
"Fig. 2: Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (genuine and shill) reviews. Figure 2a shows that a classifier trained to distinguish “human” vs. LSTM-Fake cannot distinguish “human” vs NMT-Fake* reviews. Figure 2b shows NMT-Fake* reviews are more similar to genuine reviews than shill reviews.",
"Fig. 3: Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are 0.8±0.7 for NMT-Fake* and 2.5±1.0 for LSTM-Fake. n = 20. A sample of random detection is shown as comparison.",
"Fig. 4: Adaboost-based classification of NMT-Fake and human-written reviews. Effect of varying b and λ in fake review generation. The variant native speakers had most difficulties detecting is well detectable by AdaBoost (97%).",
"Table 3: User study statistics.",
"Table 4: Listing of used openNMT-py commands.",
"Table 5: MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are phuman = 86% and pmachine = 14%, with rhuman = rmachine = 50%. Class-averaged F-scores for random predictions are 42%.",
"Fig. 5: Screenshots of the first two pages in the user study. Example 1 is a NMTFake* review, the rest are human-written.",
"Table 6: Features used in NMT-Fake review detector."
],
"file": [
"8-Figure1-1.png",
"11-Table1-1.png",
"12-Table2-1.png",
"14-Figure2-1.png",
"15-Figure3-1.png",
"16-Figure4-1.png",
"19-Table3-1.png",
"20-Table4-1.png",
"20-Table5-1.png",
"21-Figure5-1.png",
"21-Table6-1.png"
]
} |
1907.05664 | Saliency Maps Generation for Automatic Text Summarization | Saliency map generation techniques are at the forefront of explainable AI literature for a broad range of machine learning applications. Our goal is to question the limits of these approaches on more complex tasks. In this paper we apply Layer-Wise Relevance Propagation (LRP) to a sequence-to-sequence attention model trained on a text summarization dataset. We obtain unexpected saliency maps and discuss the rightfulness of these "explanations". We argue that we need a quantitative way of testing the counterfactual case to judge the truthfulness of the saliency maps. We suggest a protocol to check the validity of the importance attributed to the input and show that the saliency maps obtained sometimes capture the real use of the input features by the network, and sometimes do not. We use this example to discuss how careful we need to be when accepting them as explanation. | {
"section_name": [
"Introduction",
"The Task and the Model",
"Dataset and Training Task",
"The Model",
"Obtained Summaries",
"Layer-Wise Relevance Propagation",
"Mathematical Description",
"Generation of the Saliency Maps",
"Experimental results",
"First Observations",
"Validating the Attributions",
"Conclusion"
],
"paragraphs": [
[
"Ever since the LIME algorithm BIBREF0 , \"explanation\" techniques focusing on finding the importance of input features in regard of a specific prediction have soared and we now have many ways of finding saliency maps (also called heat-maps because of the way we like to visualize them). We are interested in this paper by the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach. We would like to first set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are more similar to attribution, which is only one part of the human explanation process BIBREF1 . We will prefer to call this importance mapping of the input an attribution rather than an explanation. We will talk about the importance of the input relevance score in regard to the model's computation and not make allusion to any human understanding of the model as a result.",
"There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2 , BIBREF3 , BIBREF4 . We refer the reader to BIBREF5 for a survey of explainable AI in general. We use in this paper Layer-Wise Relevance Propagation (LRP) BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution. It was first created to “explain\" the classification of neural networks on image recognition tasks. It was later successfully applied to text using convolutional neural networks (CNN) BIBREF6 and then Long-Short Term Memory (LSTM) networks for sentiment analysis BIBREF7 .",
"Our goal in this paper is to test the limits of the use of such a technique for more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We changed from a classification task to a generative task and chose a more complex one than text translation (in which we can easily find a word to word correspondence/importance between input and output). We chose text summarization. We consider abstractive and informative text summarization, meaning that we write a summary “in our own words\" and retain the important information of the original text. We refer the reader to BIBREF8 for more details on the task and the different variants that exist. Since the success of deep sequence-to-sequence models for text translation BIBREF9 , the same approaches have been applied to text summarization tasks BIBREF10 , BIBREF11 , BIBREF12 which use architectures on which we can apply LRP.",
"We obtain one saliency map for each word in the generated summaries, supposed to represent the use of the input features for each element of the output sequence. We observe that all the saliency maps for a text are nearly identical and decorrelated with the attention distribution. We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as in Arras et al. Arras2017. We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping “makes sense\" to us. We finally argue that in the process of identifying the important input features, verifying the saliency maps is as important as the generation step, if not more."
],
[
"We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it."
],
[
"The CNN/Daily mail dataset BIBREF12 is a text summarization dataset adapted from the Deepmind question-answering dataset BIBREF13 . It contains around three hundred thousand news articles coupled with summaries of about three sentences. These summaries are in fact “highlights\" of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries of 50 words. We had 287 000 training pairs and 11 500 test pairs. Similarly to See et al. See2017, we limit during training and prediction the input text to 400 words and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer texts. We embed the texts and summaries using a vocabulary of size 50 000, thus recreating the same parameters as See et al. See2017."
],
[
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
[
"We train the 21 350 992 parameters of the network for about 60 epochs until we achieve results that are qualitatively equivalent to the results of See et al. See2017. We obtain summaries that are broadly relevant to the text but do not match the target summaries very well. We observe the same problems such as wrong reproduction of factual details, replacing rare words with more common alternatives or repeating non-sense after the third sentence. We can see in Figure 1 an example of summary obtained compared to the target one.",
"The “summaries\" we generate are far from being valid summaries of the information in the texts but are sufficient to look at the attribution that LRP will give us. They pick up the general subject of the original text."
],
[
"We present in this section the Layer-Wise Relevance Propagation (LRP) BIBREF2 technique that we used to attribute importance to the input features, together with how we adapted it to our model and how we generated the saliency maps. LRP redistributes the output of the model from the output layer to the input by transmitting information backwards through the layers. We call this propagated backwards importance the relevance. LRP has the particularity to attribute negative and positive relevance: a positive relevance is supposed to represent evidence that led to the classifier's result while negative relevance represents evidence that participated negatively in the prediction."
],
[
"We initialize the relevance of the output layer to the value of the predicted class before softmax and we then describe locally the propagation backwards of the relevance from layer to layer. For normal neural network layers we use the form of LRP with epsilon stabilizer BIBREF2 . We write down $R_{i\\leftarrow j}^{(l, l+1)}$ the relevance received by the neuron $i$ of layer $l$ from the neuron $j$ of layer $l+1$ : ",
"$$\\begin{split}\n\nR_{i\\leftarrow j}^{(l, l+1)} &= \\dfrac{w_{i\\rightarrow j}^{l,l+1}\\textbf {z}^l_i + \\dfrac{\\epsilon \\textrm { sign}(\\textbf {z}^{l+1}_j) + \\textbf {b}^{l+1}_j}{D_l}}{\\textbf {z}^{l+1}_j + \\epsilon * \\textrm { sign}(\\textbf {z}^{l+1}_j)} * R_j^{l+1} \\\\\n\\end{split}$$ (Eq. 7) ",
"where $w_{i\\rightarrow j}^{l,l+1}$ is the network's weight parameter set during training, $\\textbf {b}^{l+1}_j$ is the bias for neuron $j$ of layer $l+1$ , $\\textbf {z}^{l}_i$ is the activation of neuron $i$ on layer $l$ , $\\epsilon $ is the stabilizing term set to 0.00001 and $D_l$ is the dimension of the $l$ -th layer.",
"The relevance of a neuron is then computed as the sum of the relevance he received from the above layer(s).",
"For LSTM cells we use the method from Arras et al.Arras2017 to solve the problem posed by the element-wise multiplications of vectors. Arras et al. noted that when such computation happened inside an LSTM cell, it always involved a “gate\" vector and another vector containing information. The gate vector containing only value between 0 and 1 is essentially filtering the second vector to allow the passing of “relevant\" information. Considering this, when we propagate relevance through an element-wise multiplication operation, we give all the upper-layer's relevance to the “information\" vector and none to the “gate\" vector."
],
[
"We use the same method to transmit relevance through the attention mechanism back to the encoder because Bahdanau's attention BIBREF9 uses element-wise multiplications as well. We depict in Figure 2 the transmission end-to-end from the output layer to the input through the decoder, attention mechanism and then the bidirectional encoder. We then sum up the relevance on the word embedding to get the token's relevance as Arras et al. Arras2017.",
"The way we generate saliency maps differs a bit from the usual context in which LRP is used as we essentially don't have one classification, but 200 (one for each word in the summary). We generate a relevance attribution for the 50 first words of the generated summary as after this point they often repeat themselves.",
"This means that for each text we obtain 50 different saliency maps, each one supposed to represent the relevance of the input for a specific generated word in the summary."
],
[
"In this section, we present our results from extracting attributions from the sequence-to-sequence model trained for abstractive text summarization. We first have to discuss the difference between the 50 different saliency maps we obtain and then we propose a protocol to validate the mappings."
],
[
"The first observation that is made is that for one text, the 50 saliency maps are almost identical. Indeed each mapping highlights mainly the same input words with only slight variations of importance. We can see in Figure 3 an example of two nearly identical attributions for two distant and unrelated words of the summary. The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word. The attention distribution changes drastically between the words in the generated summary while not impacting significantly the attribution over the input text. We deleted in an experiment the relevance propagated through the attention mechanism to the encoder and didn't observe much changes in the saliency map.",
"It can be seen as evidence that using the attention distribution as an “explanation\" of the prediction can be misleading. It is not the only information received by the decoder and the importance it “allocates\" to this attention state might be very low. What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder and the attention mechanism at each decoding step just changes marginally how it is used. Quantifying the difference between attention distribution and saliency map across multiple tasks is a possible future work.",
"The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video\" highlighted in the input text, which seems to be important for the output.",
"This allows us to question how good the saliency maps are in the sense that we question how well they actually represent the network's use of the input features. We will call that truthfulness of the attribution in regard to the computation, meaning that an attribution is truthful in regard to the computation if it actually highlights the important input features that the network attended to during prediction. We proceed to measure the truthfulness of the attributions by validating them quantitatively."
],
[
"We propose to validate the saliency maps in a similar way as Arras et al. Arras2017 by incrementally deleting “important\" words from the input text and observe the change in the resulting generated summaries.",
"We first define what “important\" (and “unimportant\") input words mean across the 50 saliency maps per texts. Relevance transmitted by LRP being positive or negative, we average the absolute value of the relevance across the saliency maps to obtain one ranking of the most “relevant\" words. The idea is that input words with negative relevance have an impact on the resulting generated word, even if it is not participating positively, while a word with a relevance close to zero should not be important at all. We did however also try with different methods, like averaging the raw relevance or averaging a scaled absolute value where negative relevance is scaled down by a constant factor. The absolute value average seemed to deliver the best results.",
"We delete incrementally the important words (words with the highest average) in the input and compared it to the control experiment that consists of deleting the least important word and compare the degradation of the resulting summaries. We obtain mitigated results: for some texts, we observe a quick degradation when deleting important words which are not observed when deleting unimportant words (see Figure 4 ), but for other test examples we don't observe a significant difference between the two settings (see Figure 5 ).",
"One might argue that the second summary in Figure 5 is better than the first one as it makes better sentences but as the model generates inaccurate summaries, we do not wish to make such a statement.",
"This however allows us to say that the attribution generated for the text at the origin of the summaries in Figure 4 are truthful in regard to the network's computation and we may use it for further studies of the example, whereas for the text at the origin of Figure 5 we shouldn't draw any further conclusions from the attribution generated.",
"One interesting point is that one saliency map didn't look “better\" than the other, meaning that there is no apparent way of determining their truthfulness in regard of the computation without doing a quantitative validation. This brings us to believe that even in simpler tasks, the saliency maps might make sense to us (for example highlighting the animal in an image classification task), without actually representing what the network really attended too, or in what way.",
"We defined without saying it the counterfactual case in our experiment: “Would the important words in the input be deleted, we would have a different summary\". Such counterfactuals are however more difficult to define for image classification for example, where it could be applying a mask over an image, or just filtering a colour or a pattern. We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions and thus weight how much we can trust them."
],
[
"In this work, we have implemented and applied LRP to a sequence-to-sequence model trained on a more complex task than usual: text summarization. We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique for Bahdanau et al. Bahdanau2014 attention mechanism.",
"We observed a peculiar behaviour of the saliency maps for the words in the output summary: they are almost all identical and seem uncorrelated with the attention distribution. We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps. We obtain a ranking of the word from the most important to the least important and proceeded to delete one or another.",
"We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.",
"Future work would look into the saliency maps generated by applying LRP to pointer-generator networks and compare to our current results as well as mathematically justifying the average that we did when validating our saliency maps. Some additional work is also needed on the validation of the saliency maps with counterfactual tests. The exploitation and evaluation of saliency map are a very important step and should not be overlooked."
]
]
} | {
"question": [
"Which baselines did they compare?",
"How many attention layers are there in their model?",
"Is the explanation from saliency map correct?"
],
"question_id": [
"6e2ad9ad88cceabb6977222f5e090ece36aa84ea",
"aacb0b97aed6fc6a8b471b8c2e5c4ddb60988bf5",
"710c1f8d4c137c8dad9972f5ceacdbf8004db208"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"saliency",
"saliency",
"saliency"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it.",
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"highlighted_evidence": [
"We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset.",
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
]
},
{
"unanswerable": false,
"extractive_spans": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"highlighted_evidence": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
]
}
],
"annotation_id": [
"0850b7c0555801d057062480de6bb88adb81cae3",
"93216bca45711b73083372495d9a2667736fbac9"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"101dbdd2108b3e676061cb693826f0959b47891b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "one",
"evidence": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"highlighted_evidence": [
"The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. "
]
}
],
"annotation_id": [
"e0ca6b95c1c051723007955ce6804bd29f325379"
],
"worker_id": [
"101dbdd2108b3e676061cb693826f0959b47891b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.",
"The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video\" highlighted in the input text, which seems to be important for the output."
],
"highlighted_evidence": [
"But we also showed that in some cases the saliency maps seem to not capture the important input features. ",
"The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates"
]
}
],
"annotation_id": [
"79e54a7b9ba9cde5813c3434e64a02d722f13b23"
],
"worker_id": [
"101dbdd2108b3e676061cb693826f0959b47891b"
]
}
]
} | {
"caption": [
"Figure 2: Representation of the propagation of the relevance from the output to the input. It passes through the decoder and attention mechanism for each previous decoding time-step, then is passed onto the encoder which takes into account the relevance transiting in both direction due to the bidirectional nature of the encoding LSTM cell.",
"Figure 3: Left : Saliency map over the truncated input text for the second generated word “the”. Right : Saliency map over the truncated input text for the 25th generated word “investigation”. We see that the difference between the mappings is marginal.",
"Figure 4: Summary from Figure 1 generated after deleting important and unimportant words from the input text. We observe a significant difference in summary degradation between the two experiments, where the decoder just repeats the UNKNOWN token over and over."
],
"file": [
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png"
]
} |
1910.14497 | Probabilistic Bias Mitigation in Word Embeddings | It has been shown that word embeddings derived from large corpora tend to incorporate biases present in their training data. Various methods for mitigating these biases have been proposed, but recent work has demonstrated that these methods hide but fail to truly remove the biases, which can still be observed in word nearest-neighbor statistics. In this work we propose a probabilistic view of word embedding bias. We leverage this framework to present a novel method for mitigating bias which relies on probabilistic observations to yield a more robust bias mitigation algorithm. We demonstrate that this method effectively reduces bias according to three separate measures of bias while maintaining embedding quality across various popular benchmark semantic tasks | {
"section_name": [
"Introduction",
"Background ::: Geometric Bias Mitigation",
"Background ::: Geometric Bias Mitigation ::: WEAT",
"Background ::: Geometric Bias Mitigation ::: RIPA",
"Background ::: Geometric Bias Mitigation ::: Neighborhood Metric",
"A Probabilistic Framework for Bias Mitigation",
"A Probabilistic Framework for Bias Mitigation ::: Probabilistic Bias Mitigation",
"A Probabilistic Framework for Bias Mitigation ::: Nearest Neighbor Bias Mitigation",
"Experiments",
"Discussion",
"Discussion ::: Acknowledgements",
"Experiment Notes",
"Professions",
"WEAT Word Sets"
],
"paragraphs": [
[
"Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models.",
"The most well-established method thus far for mitigating bias relies on projecting target words onto a bias subspace (such as a gender subspace) and subtracting out the difference between the resulting distances BIBREF0. On the other hand, the most popular metric for measuring bias is the WEAT statistic BIBREF1, which compares the cosine similarities between groups of words. However, WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4, and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among socially-biased words.",
"In response to this, we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective. The motivation for this approach is two-fold. First, most popular word embedding algorithms are probabilistic at their core – i.e., they are trained (explicitly or implicitly BIBREF6) to minimize some form of word co-occurrence probabilities. Thus, we argue that a framework for measuring and treating bias in these embeddings should take into account, in addition to their geometric aspect, their probabilistic nature too. On the other hand, the issue of bias has also been approached (albeit in different contexts) in the fairness literature, where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria. By considering analogous criteria for the word embedding setting, we seek to draw connections between these two bodies of work.",
"We present experiments on various bias mitigation benchmarks and show that our framework is comparable to state-of-the-art alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias. For fair comparison, we focus on mitigating a binary gender bias in pre-trained word embeddings using SGNS (skip-gram with negative-sampling), though we note that this framework and methods could be extended to other types of bias and word embedding algorithms."
],
[
"Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen)...\\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \\sum _{j=1}^{k} (v \\cdot b_j) b_j$ where a subspace $B$ is defined by k orthogonal unit vectors $B = {b_1,...,b_k}$."
],
[
"The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:",
"Where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the words groups, and a value of zero indicates $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT."
],
[
"The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$."
],
[
"The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias."
],
[
"Our objective here is to extend and complement the geometric notions of word embedding bias described in the previous section with an alternative, probabilistic, approach. Intuitively, we seek a notion of equality akin to that of demographic parity in the fairness literature, which requires that a decision or outcome be independent of a protected attribute such as gender. BIBREF7. Similarly, when considering a probabilistic definition of unbiased in word embeddings, we can consider the conditional probabilities of word pairs, ensuring for example that $p(doctor|man) \\approx p(doctor|woman)$, and can extend this probabilistic framework to include the neighborhood of a target word, addressing the potential pitfalls of geometric bias mitigation.",
"Conveniently, most word embedding frameworks allow for immediate computation of the conditional probabilities $P(w|c)$. Here, we focus our attention on the Skip-Gram method with Negative Sampling (SGNS) of BIBREF8, although our framework can be equivalently instantiated for most other popular embedding methods, owing to their core similarities BIBREF6, BIBREF9. Leveraging this probabilistic nature, we construct a bias mitigation method in two steps, and examine each step as an independent method as well as the resulting composite method."
],
[
"This component of our bias mitigation framework seeks to enforce that the probability of prediction or outcome cannot depend on a protected class such as gender. We can formalize this intuitive goal through a loss function that penalizes the discrepancy between the conditional probabilities of a target word (i.e., one that should not be affected by the protected attribute) conditioned on two words describing the protected attribute (e.g., man and woman in the case of gender). That is, for every target word we seek to minimize:",
"where $\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen), \\dots \\rbrace $ is a set of word pairs characterizing the protected attribute, akin to that used in previous work BIBREF0.",
"At this point, the specific form of the objective will depend on the type of word embeddings used. For our expample of SGNS, recall that this algorithm models the conditional probability of a target word given a context word as a function of the inner product of their representations. Though an exact method for calculating the conditional probability includes summing over conditional probability of all the words in the vocabulary, we can use the estimation of log conditional probability proposed by BIBREF8, i.e., $ \\log p(w_O|w_I) \\approx \\log \\sigma ({v^{\\prime }_{wo}}^T v_{wI}) + \\sum _{i=1}^{k} [\\log {\\sigma ({{-v^{\\prime }_{wi}}^T v_{wI}})}] $."
],
[
"Based on observations by BIBREF5, we extend our method to consider the composition of the neighborhood of socially-gendered words of a target word. We note that bias in a word embedding depends not only on the relationship between a target word and explicitly gendered words like man and woman, but also between a target word and socially-biased male or female words. Bolukbasi et al BIBREF0 proposed a method for eliminating this kind of indirect bias through geometric bias mitigation, but it is shown to be ineffective by the neighborhood metric BIBREF5.",
"Instead, we extend our method of bias mitigation to account for this neighborhood effect. Specifically, we examine the conditional probabilities of a target word given the $k/2$ nearest neighbors from the male socially-biased words as well as given the $k/2$ female socially-biased words (in sorted order, from smallest to largest). The groups of socially-biased words are constructed as described in the neighborhood metric. If the word is unbiased according to the neighborhood metric, these probabilities should be comparable. We then use the following as our loss function:",
"",
"where $m$ and $f$ represent the male and female neighbors sorted by distance to the target word $t$ (we use $L1$ distance)."
],
[
"We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.",
"We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.",
"We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics."
],
[
"We proposed a simple method of bias mitigation based on this probabilistic notions of fairness, and showed that it leads to promising results in various benchmark bias mitigation tasks. Future work should include considering a more rigorous definition and non-binary of bias and experimenting with various embedding algorithms and network architectures."
],
[
"The authors would like to thank Tommi Jaakkola for stimulating discussions during the initial stages of this work."
],
[
"For Equation 4, as described in the original work, in regards to the k sample words $w_i$ is drawn from the corpus using the Unigram distribution raised to the 3/4 power.",
"For reference, the most male socially-biased words include words such as:’john’, ’jr’, ’mlb’, ’dick’, ’nfl’, ’cfl’, ’sgt’, ’abbot’, ’halfback’, ’jock’, ’mike’, ’joseph’,while the most female socially-biased words include words such as:’feminine’, ’marital’, ’tatiana’, ’pregnancy’, ’eva’, ’pageant’, ’distress’, ’cristina’, ’ida’, ’beauty’, ’sexuality’,’fertility’"
],
[
"'accountant', 'acquaintance', 'actor', 'actress', 'administrator', 'adventurer', 'advocate', 'aide', 'alderman', 'ambassador', 'analyst', 'anthropologist', 'archaeologist', 'archbishop', 'architect', 'artist', 'assassin', 'astronaut', 'astronomer', 'athlete', 'attorney', 'author', 'baker', 'banker', 'barber', 'baron', 'barrister', 'bartender', 'biologist', 'bishop', 'bodyguard', 'boss', 'boxer', 'broadcaster', 'broker', 'businessman', 'butcher', 'butler', 'captain', 'caretaker', 'carpenter', 'cartoonist', 'cellist', 'chancellor', 'chaplain', 'character', 'chef', 'chemist', 'choreographer', 'cinematographer', 'citizen', 'cleric', 'clerk', 'coach', 'collector', 'colonel', 'columnist', 'comedian', 'comic', 'commander', 'commentator', 'commissioner', 'composer', 'conductor', 'confesses', 'congressman', 'constable', 'consultant', 'cop', 'correspondent', 'counselor', 'critic', 'crusader', 'curator', 'dad', 'dancer', 'dean', 'dentist', 'deputy', 'detective', 'diplomat', 'director', 'doctor', 'drummer', 'economist', 'editor', 'educator', 'employee', 'entertainer', 'entrepreneur', 'envoy', 'evangelist', 'farmer', 'filmmaker', 'financier', 'fisherman', 'footballer', 'foreman', 'gangster', 'gardener', 'geologist', 'goalkeeper', 'guitarist', 'headmaster', 'historian', 'hooker', 'illustrator', 'industrialist', 'inspector', 'instructor', 'inventor', 'investigator', 'journalist', 'judge', 'jurist', 'landlord', 'lawyer', 'lecturer', 'legislator', 'librarian', 'lieutenant', 'lyricist', 'maestro', 'magician', 'magistrate', 'maid', 'manager', 'marshal', 'mathematician', 'mechanic', 'midfielder', 'minister', 'missionary', 'monk', 'musician', 'nanny', 'narrator', 'naturalist', 'novelist', 'nun', 'nurse', 'observer', 'officer', 'organist', 'painter', 'pastor', 'performer', 'philanthropist', 'philosopher', 'photographer', 'physician', 'physicist', 'pianist', 'planner', 'playwright', 'poet', 'policeman', 'politician', 'preacher', 'president', 'priest', 'principal', 'prisoner', 'professor', 'programmer', 'promoter', 'proprietor', 'prosecutor', 'protagonist', 'provost', 'psychiatrist', 'psychologist', 'rabbi', 'ranger', 'researcher', 'sailor', 'saint', 'salesman', 'saxophonist', 'scholar', 'scientist', 'screenwriter', 'sculptor', 'secretary', 'senator', 'sergeant', 'servant', 'singer', 'skipper', 'sociologist', 'soldier', 'solicitor', 'soloist', 'sportsman', 'statesman', 'steward', 'student', 'substitute', 'superintendent', 'surgeon', 'surveyor', 'swimmer', 'teacher', 'technician', 'teenager', 'therapist', 'trader', 'treasurer', 'trooper', 'trumpeter', 'tutor', 'tycoon', 'violinist', 'vocalist', 'waiter', 'waitress', 'warden', 'warrior', 'worker', 'wrestler', 'writer'"
],
[
"Words used for WEAT statistic, consisting of baseline bias tests and gender bias tests in the format X vs Y / A vs B",
"Flowers vs Insects / Pleasant vs Unpleasant",
"X: \"aster\", \"clover\", \"hyacinth\", \"marigold\", \"poppy\", \"azalea\", \"crocus\", \"iris\", \"orchid\", \"rose\", \"bluebell\", \"daffodil\", \"lilac\", \"pansy\", \"tulip\", \"buttercup\", \"daisy\", \"lily\", \"peony\", \"violet\", \"carnation\", \"gladiola\", \"magnolia\", \"petunia\", \"zinnia\"",
"Y: \"ant\", \"caterpillar\", \"flea\", \"locust\", \"spider\", \"bedbug\", \"centipede\", \"fly\", \"maggot\", \"tarantula\", \"bee\", \"cockroach\", \"gnat\", \"mosquito\", \"termite\", \"beetle\", \"cricket\", \"hornet\", \"moth\", \"wasp\", \"blackfly\", \"dragonfly\", \"horsefly\", \"roach\", \"weevil\"",
"A: \"caress\", \"freedom\", \"health\", \"love\", \"peace\", \"cheer\", \"friend\", \"heaven\", \"loyal\", \"pleasure\", \"diamond\", \"gentle\", \"honest\", \"lucky\", \"rainbow\", \"diploma\", \"gift\", \"honor\", \"miracle\", \"sunrise\", \"family\", \"happy\", \"laughter\", \"paradise\", \"vacation\"",
"B: \"abuse\", \"crash\", \"filth\", \"murder\", \"sickness\", \"accident\", \"death\", \"grief\", \"poison\", \"stink\", \"assault\", \"disaster\", \"hatred\", \"pollute\", \"tragedy\", \"divorce\", \"jail\", \"poverty\", \"ugly\", \"cancer\", \"kill\", \"rotten\", \"vomit\", \"agony\", \"prison\"",
"Instruments vs Weapons / Pleasant vs Unpleasant:",
"X: \"bagpipe\", \"cello\", \"guitar\", \"lute\", \"trombone\", \"banjo\", \"clarinet\", \"harmonica\", \"mandolin\", \"trumpet\", \"bassoon\", \"drum\", \"harp\", \"oboe\", \"tuba\", \"bell\", \"fiddle\", \"harpsichord\", \"piano\", \"viola\", \"bongo\", \"flute\", \"horn\", \"saxophone\", \"violin\"",
"Y: \"arrow\", \"club\", \"gun\", \"missile\", \"spear\", \"ax\", \"dagger\", \"harpoon\", \"pistol\", \"sword\", \"blade\", \"dynamite\", \"hatchet\", \"rifle\", \"tank\", \"bomb\", \"firearm\", \"knife\", \"shotgun\", \"teargas\", \"cannon\", \"grenade\", \"mace\", \"slingshot\", \"whip\"",
"A: \"caress\", \"freedom\", \"health\", \"love\", \"peace\", \"cheer\", \"friend\", \"heaven\", \"loyal\", \"pleasure\", \"diamond\", \"gentle\", \"honest\", \"lucky\", \"rainbow\", \"diploma\", \"gift\", \"honor\", \"miracle\", \"sunrise\", \"family\", \"happy\", \"laughter\", \"paradise\", \"vacation\"",
"B: \"abuse\", \"crash\", \"filth\", \"murder\", \"sickness\", \"accident\", \"death\", \"grief\", \"poison\", \"stink\", \"assault\", \"disaster\", \"hatred\", \"pollute\", \"tragedy\", \"divorce\", \"jail\", \"poverty\", \"ugly\", \"cancer\", \"kill\", \"rotten\", \"vomit\", \"agony\", \"prison\"",
"Male vs Female / Career vs Family:",
"X: \"brother\", \"father\", \"uncle\", \"grandfather\", \"son\", \"he\", \"his\", \"him\", \"man\", \"himself\", \"men\", \"husband\", \"boy\", \"uncle\", \"nephew\", \"boyfriend\", \"king\", \"actor\"",
"Y: \"sister\", \"mother\", \"aunt\", \"grandmother\", \"daughter\", \"she\", \"hers\", \"her\", \"woman\", \"herself\", \"women\", \"wife\", \"aunt\", \"niece\", \"girlfriend\", \"queen\", \"actress\"",
"A: \"executive\", \"management\", \"professional\", \"corporation\", \"salary\", \"office\", \"business\", \"career\", \"industry\", \"company\", \"promotion\", \"profession\", \"CEO\", \"manager\", \"coworker\", \"entrepreneur\"",
"B: \"home\", \"parents\", \"children\", \"family\", \"cousins\", \"marriage\", \"wedding\", \"relatives\", \"grandparents\", \"grandchildren\", \"nurture\", \"child\", \"toddler\", \"infant\", \"teenager\"",
"Math vs Art / Male vs Female:",
"X: \"math\", \"algebra\", \"geometry\", \"calculus\", \"equations\", \"computation\", \"numbers\", \"addition\", \"trigonometry\", \"arithmetic\", \"logic\", \"proofs\", \"multiplication\", \"mathematics\"",
"Y: \"poetry\", \"art\", \"Shakespeare\", \"dance\", \"literature\", \"novel\", \"symphony\", \"drama\", \"orchestra\", \"music\", \"ballet\", \"arts\", \"creative\", \"sculpture\"",
"A: \"brother\", \"father\", \"uncle\", \"grandfather\", \"son\", \"he\", \"his\", \"him\", \"man\", \"himself\", \"men\", \"husband\", \"boy\", \"uncle\", \"nephew\", \"boyfriend\", \"king\", \"actor\"",
"B: \"sister\", \"mother\", \"aunt\", \"grandmother\", \"daughter\", \"she\", \"hers\", \"her\", \"woman\", \"herself\", \"women\", \"wife\", \"aunt\", \"niece\", \"girlfriend\", \"queen\", \"actress\"",
"Science vs Art / Male8 vs Female8:",
"X:\"science\", \"technology\", \"physics\", \"chemistry\", \"Einstein\", \"NASA\", \"experiment\", \"astronomy\", \"biology\", \"aeronautics\", \"mechanics\", \"thermodynamics\"",
"Y: \"poetry\", \"art\", \"Shakespeare\", \"dance\", \"literature\", \"novel\", \"symphony\", \"drama\", \"orchestra\", \"music\", \"ballet\", \"arts\", \"creative\", \"sculpture\"",
"A: \"brother\", \"father\", \"uncle\", \"grandfather\", \"son\", \"he\", \"his\", \"him\", \"man\", \"himself\", \"men\", \"husband\", \"boy\", \"uncle\", \"nephew\", \"boyfriend\"",
"B: \"sister\", \"mother\", \"aunt\", \"grandmother\", \"daughter\", \"she\", \"hers\", \"her\", \"woman\", \"herself\", \"women\", \"wife\", \"aunt\", \"niece\", \"girlfriend\""
]
]
} | {
"question": [
"How is embedding quality assessed?",
"What are the three measures of bias which are reduced in experiments?",
"What are the probabilistic observations which contribute to the more robust algorithm?"
],
"question_id": [
"47726be8641e1b864f17f85db9644ce676861576",
"8958465d1eaf81c8b781ba4d764a4f5329f026aa",
"31b6544346e9a31d656e197ad01756813ee89422"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"bias",
"bias",
"bias"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.",
"We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.",
"We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics."
],
"highlighted_evidence": [
"We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.",
"We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.",
"We compare this method of bias mitigation with the no bias mitigation (\"Orig\"), geometric bias mitigation (\"Geo\"), the two pieces of our method alone (\"Prob\" and \"KNN\") and the composite method (\"KNN+Prob\"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics."
]
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"50e0354ccb4d7d6fda33c34e69133daaa8978a2f",
"eb66f1f7e89eca5dcf2ae6ef450b1693a43f4e69"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "RIPA, Neighborhood Metric, WEAT",
"evidence": [
"Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\\mathcal {P} = \\lbrace (he,she),(man,woman),(king,queen)...\\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \\sum _{j=1}^{k} (v \\cdot b_j) b_j$ where a subspace $B$ is defined by k orthogonal unit vectors $B = {b_1,...,b_k}$.",
"The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:",
"Where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the words groups, and a value of zero indicates $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT.",
"The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$.",
"The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias.",
"FLOAT SELECTED: Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. Figure 2: Remaining Bias (WEAT score)"
],
"highlighted_evidence": [
"Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0.",
"The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:\n\nWhere $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \\in A} cos(w,a) - mean_{b \\in B} cos(w,a)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured.",
"The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. ",
"The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector.",
"FLOAT SELECTED: Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. Figure 2: Remaining Bias (WEAT score)"
]
}
],
"annotation_id": [
"08a22700ab88c5fb568745e6f7c1b5da25782626"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"9b4792d66cec53f8ea37bccd5cf7cb9c22290d82"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
]
} | {
"caption": [
"Figure 1: Word embedding semantic quality benchmarks for each bias mitigation method (higher is better). See Jastrzkebski et al. [11] for details of each metric.",
"Table 1: Remaining Bias (as measured by RIPA and Neighborhood metrics) in fastText embeddings for baseline (top two rows) and our (bottom three) methods. Figure 2: Remaining Bias (WEAT score)"
],
"file": [
"4-Figure1-1.png",
"4-Table1-1.png"
]
} |
1912.02481 | Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yor\`ub\'a and Twi | The success of several architectures to learn semantic representations from unannotated text and the availability of these kind of texts in online multilingual resources such as Wikipedia has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, the evaluation is more difficult and normally ignored, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yor\`ub\'a and Twi, and compare the word embeddings obtained in this way, with word embeddings obtained from curated corpora and a language-dependent processing. We analyse the noise in the publicly available corpora, collect high quality and noisy data for the two languages and quantify the improvements that depend not only on the amount of data but on the quality too. We also use different architectures that learn word representations both from surface forms and characters to further exploit all the available information which showed to be important for these languages. For the evaluation, we manually translate the wordsim-353 word pairs dataset from English into Yor\`ub\'a and Twi. As output of the work, we provide corpora, embeddings and the test suits for both languages. | {
"section_name": [
"Introduction",
"Related Work",
"Languages under Study ::: Yorùbá",
"Languages under Study ::: Twi",
"Data",
"Data ::: Training Corpora",
"Data ::: Evaluation Test Sets ::: Yorùbá.",
"Data ::: Evaluation Test Sets ::: Twi",
"Semantic Representations",
"Semantic Representations ::: Word Embeddings Architectures",
"Semantic Representations ::: Experiments ::: FastText Training and Evaluation",
"Semantic Representations ::: Experiments ::: CWE Training and Evaluation",
"Semantic Representations ::: Experiments ::: BERT Evaluation on NER Task",
"Summary and Discussion",
"Acknowledgements"
],
"paragraphs": [
[
"In recent years, word embeddings BIBREF0, BIBREF1, BIBREF2 have been proven to be very useful for training downstream natural language processing (NLP) tasks. Moreover, contextualized embeddings BIBREF3, BIBREF4 have been shown to further improve the performance of NLP tasks such as named entity recognition, question answering, or text classification when used as word features because they are able to resolve ambiguities of word representations when they appear in different contexts. Different deep learning architectures such as multilingual BERT BIBREF4, LASER BIBREF5 and XLM BIBREF6 have proved successful in the multilingual setting. All these architectures learn the semantic representations from unannotated text, making them cheap given the availability of texts in online multilingual resources such as Wikipedia. However, the evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. This is the best-case scenario, languages with tones of data for training that generate high-quality models.",
"For low-resourced languages, the evaluation is more difficult and therefore normally ignored simply because of the lack of resources. In these cases, training data is scarce, and the assumption that the capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced one does not need to be true. In this work, we focus on two African languages, Yorùbá and Twi, and carry out several experiments to verify this claim. Just by a simple inspection of the word embeddings trained on Wikipedia by fastText, we see a high number of non-Yorùbá or non-Twi words in the vocabularies. For Twi, the vocabulary has only 935 words, and for Yorùbá we estimate that 135 k out of the 150 k words belong to other languages such as English, French and Arabic.",
"In order to improve the semantic representations for these languages, we collect online texts and study the influence of the quality and quantity of the data in the final models. We also examine the most appropriate architecture depending on the characteristics of each language. Finally, we translate test sets and annotate corpora to evaluate the performance of both our models together with fastText and BERT pre-trained embeddings which could not be evaluated otherwise for Yorùbá and Twi. The evaluation is carried out in a word similarity and relatedness task using the wordsim-353 test set, and in a named entity recognition (NER) task where embeddings play a crucial role. Of course, the evaluation of the models in only two tasks is not exhaustive but it is an indication of the quality we can obtain for these two low-resourced languages as compared to others such as English where these evaluations are already available.",
"The rest of the paper is organized as follows. Related works are reviewed in Section SECREF2 The two languages under study are described in Section SECREF3. We introduce the corpora and test sets in Section SECREF4. The fifth section explores the different training architectures we consider, and the experiments that are carried out. Finally, discussion and concluding remarks are given in Section SECREF6"
],
[
"The large amount of freely available text in the internet for multiple languages is facilitating the massive and automatic creation of multilingual resources. The resource par excellence is Wikipedia, an online encyclopedia currently available in 307 languages. Other initiatives such as Common Crawl or the Jehovah’s Witnesses site are also repositories for multilingual data, usually assumed to be noisier than Wikipedia. Word and contextual embeddings have been pre-trained on these data, so that the resources are nowadays at hand for more than 100 languages. Some examples include fastText word embeddings BIBREF2, BIBREF7, MUSE embeddings BIBREF8, BERT multilingual embeddings BIBREF4 and LASER sentence embeddings BIBREF5. In all cases, embeddings are trained either simultaneously for multiple languages, joining high- and low-resource data, or following the same methodology.",
"On the other hand, different approaches try to specifically design architectures to learn embeddings in a low-resourced setting. ChaudharyEtAl:2018 follow a transfer learning approach that uses phonemes, lemmas and morphological tags to transfer the knowledge from related high-resource language into the low-resource one. jiangEtal:2018 apply Positive-Unlabeled Learning for word embedding calculations, assuming that unobserved pairs of words in a corpus also convey information, and this is specially important for small corpora.",
"In order to assess the quality of word embeddings, word similarity and relatedness tasks are usually used. wordsim-353 BIBREF9 is a collection of 353 pairs annotated with semantic similarity scores in a scale from 0 to 10. Even the problems detected in this dataset BIBREF10, it is widely used by the community. The test set was originally created for English, but the need for comparison with other languages has motivated several translations/adaptations. In hassanMihalcea:2009 the test was translated manually into Spanish, Romanian and Arabic and the scores were adapted to reflect similarities in the new language. The reported correlation between the English scores and the Spanish ones is 0.86. Later, JoubarneInkpen:2011 show indications that the measures of similarity highly correlate across languages. leviantReichart:2015 translated also wordsim-353 into German, Italian and Russian and used crowdsourcing to score the pairs. Finally, jiangEtal:2018 translated with Google Cloud the test set from English into Czech, Danish and Dutch. In our work, native speakers translate wordsim-353 into Yorùbá and Twi, and similarity scores are kept unless the discrepancy with English is big (see Section SECREF11 for details). A similar approach to our work is done for Gujarati in JoshiEtAl:2019."
],
[
"is a language in the West Africa with over 50 million speakers. It is spoken among other languages in Nigeria, republic of Togo, Benin Republic, Ghana and Sierra Leon. It is also a language of Òrìsà in Cuba, Brazil, and some Caribbean countries. It is one of the three major languages in Nigeria and it is regarded as the third most spoken native African language. There are different dialects of Yorùbá in Nigeria BIBREF11, BIBREF12, BIBREF13. However, in this paper our focus is the standard Yorùbá based upon a report from the 1974 Joint Consultative Committee on Education BIBREF14.",
"Standard Yorùbá has 25 letters without the Latin characters c, q, v, x and z. There are 18 consonants (b, d, f, g, gb, j[dz], k, l, m, n, p[kp], r, s, ṣ, t, w y[j]), 7 oral vowels (a, e, ẹ, i, o, ọ, u), five nasal vowels, (an, $ \\underaccent{\\dot{}}{e}$n, in, $ \\underaccent{\\dot{}}{o}$n, un) and syllabic nasals (m̀, ḿ, ǹ, ń). Yorùbá is a tone language which makes heavy use of lexical tones which are indicated by the use of diacritics. There are three tones in Yorùbá namely low, mid and high which are represented as grave ($\\setminus $), macron ($-$) and acute ($/$) symbols respectively. These tones are applied on vowels and syllabic nasals. Mid tone is usually left unmarked on vowels and every initial or first vowel in a word cannot have a high tone. It is important to note that tone information is needed for correct pronunciation and to have the meaning of a word BIBREF15, BIBREF12, BIBREF14. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) are different words with different dots and diacritic combinations. According to Asahiah2014, Standard Yorùbá uses 4 diacritics, 3 are for marking tones while the fourth which is the dot below is used to indicate the open phonetic variants of letter \"e\" and \"o\" and the long variant of \"s\". Also, there are 19 single diacritic letters, 3 are marked with dots below (ẹ, ọ, ṣ) while the rest are either having the grave or acute accent. The four double diacritics are divided between the grave and the acute accent as well.",
"As noted in Asahiah2014, most of the Yorùbá texts found in websites or public domain repositories (i) either use the correct Yorùbá orthography or (ii) replace diacritized characters with un-diacritized ones.",
"This happens as a result of many factors, but most especially to the unavailability of appropriate input devices for the accurate application of the diacritical marks BIBREF11. This has led to research on restoration models for diacritics BIBREF16, but the problem is not well solved and we find that most Yorùbá text in the public domain today is not well diacritized. Wikipedia is not an exception."
],
[
"is an Akan language of the Central Tano Branch of the Niger Congo family of languages. It is the most widely spoken of the about 80 indigenous languages in Ghana BIBREF17. It has about 9 million native speakers and about a total of 17–18 million Ghanaians have it as either first or second language. There are two mutually intelligible dialects, Asante and Akuapem, and sub-dialectical variants which are mostly unknown to and unnoticed by non-native speakers. It is also mutually intelligible with Fante and to a large extent Bono, another of the Akan languages. It is one of, if not the, easiest to learn to speak of the indigenous Ghanaian languages. The same is however not true when it comes to reading and especially writing. This is due to a number of easily overlooked complexities in the structure of the language. First of all, similarly to Yorùbá, Twi is a tonal language but written without diacritics or accents. As a result, words which are pronounced differently and unambiguous in speech tend to be ambiguous in writing. Besides, most of such words fit interchangeably in the same context and some of them can have more than two meanings. A simple example is:",
"Me papa aba nti na me ne wo redi no yie no. S wo ara wo nim s me papa ba a, me suban fofor adi.",
"This sentence could be translated as",
"(i) I'm only treating you nicely because I'm in a good mood. You already know I'm a completely different person when I'm in a good mood.",
"(ii) I'm only treating you nicely because my dad is around. You already know I'm a completely different person when my dad comes around.",
"Another characteristic of Twi is the fact that a good number of stop words have the same written form as content words. For instance, “na” or “na” could be the words “and, then”, the phrase “and then” or the word “mother”. This kind of ambiguity has consequences in several natural language applications where stop words are removed from text.",
"Finally, we want to point out that words can also be written with or without prefixes. An example is this same na and na which happen to be the same word with an omissible prefix across its multiple senses. For some words, the prefix characters are mostly used when the word begins a sentence and omitted in the middle. This however depends on the author/speaker. For the word embeddings calculation, this implies that one would have different embeddings for the same word found in different contexts."
],
[
"We collect clean and noisy corpora for Yorùbá and Twi in order to quantify the effect of noise on the quality of the embeddings, where noisy has a different meaning depending on the language as it will be explained in the next subsections."
],
[
"For Yorùbá, we use several corpora collected by the Niger-Volta Language Technologies Institute with texts from different sources, including the Lagos-NWU conversational speech corpus, fully-diacritized Yorùbá language websites and an online Bible. The largest source with clean data is the JW300 corpus. We also created our own small-sized corpus by web-crawling three Yorùbá language websites (Alàkwé, r Yorùbá and Èdè Yorùbá Rẹw in Table TABREF7), some Yoruba Tweets with full diacritics and also news corpora (BBC Yorùbá and VON Yorùbá) with poor diacritics which we use to introduce noise. By noisy corpus, we refer to texts with incorrect diacritics (e.g in BBC Yorùbá), removal of tonal symbols (e.g in VON Yorùbá) and removal of all diacritics/under-dots (e.g some articles in Yorùbá Wikipedia). Furthermore, we got two manually typed fully-diacritized Yorùbá literature (Ìrìnkèrindò nínú igbó elégbèje and Igbó Olódùmarè) both written by Daniel Orowole Olorunfemi Fagunwa a popular Yorùbá author. The number of tokens available from each source, the link to the original source and the quality of the data is summarised in Table TABREF7.",
"The gathering of clean data in Twi is more difficult. We use as the base text as it has been shown that the Bible is the most available resource for low and endangered languages BIBREF18. This is the cleanest of all the text we could obtain. In addition, we use the available (and small) Wikipedia dumps which are quite noisy, i.e. Wikipedia contains a good number of English words, spelling errors and Twi sentences formulated in a non-natural way (formulated as L2 speakers would speak Twi as compared to native speakers). Lastly, we added text crawled from jw and the JW300 Twi corpus. Notice that the Bible text, is mainly written in the Asante dialect whilst the last, Jehovah's Witnesses, was written mainly in the Akuapem dialect. The Wikipedia text is a mixture of the two dialects. This introduces a lot of noise into the embeddings as the spelling of most words differs especially at the end of the words due to the mixture of dialects. The JW300 Twi corpus also contains mixed dialects but is mainly Akuampem. In this case, the noise comes also from spelling errors and the uncommon addition of diacritics which are not standardised on certain vowels. Figures for Twi corpora are summarised in the bottom block of Table TABREF7."
],
[
"One of the contribution of this work is the introduction of the wordsim-353 word pairs dataset for Yorùbá. All the 353 word pairs were translated from English to Yorùbá by 3 native speakers. The set is composed of 446 unique English words, 348 of which can be expressed as one-word translation in Yorùbá (e.g. book translates to ìwé). In 61 cases (most countries and locations but also other content words) translations are transliterations (e.g. Doctor is dókítà and cucumber kùkúmbà.). 98 words were translated by short phrases instead of single words. This mostly affects words from science and technology (e.g. keyboard translates to pátákó ìtwé —literally meaning typing board—, laboratory translates to ìyàrá ìṣèwádìí —research room—, and ecology translates to ìm nípa àyíká while psychology translates to ìm nípa dá). Finally, 6 terms have the same form in English and Yorùbá therefore they are retained like that in the dataset (e.g. Jazz, Rock and acronyms such as FBI or OPEC).",
"We also annotate the Global Voices Yorùbá corpus to test the performance of our trained Yorùbá BERT embeddings on the named entity recognition task. The corpus consists of 25 k tokens which we annotate with four named entity types: DATE, location (LOC), organization (ORG) and personal names (PER). Any other token that does not belong to the four named entities is tagged with \"O\". The dataset is further split into training (70%), development (10%) and test (20%) partitions. Table TABREF12 shows the number of named entities per type and partition."
],
[
"Just like Yorùbá, the wordsim-353 word pairs dataset was translated for Twi. Out of the 353 word pairs, 274 were used in this case. The remaining 79 pairs contain words that translate into longer phrases.",
"The number of words that can be translated by a single token is higher than for Yorùbá. Within the 274 pairs, there are 351 unique English words which translated to 310 unique Twi words. 298 of the 310 Twi words are single word translations, 4 transliterations and 16 are used as is.",
"Even if JoubarneInkpen:2011 showed indications that semantic similarity has a high correlation across languages, different nuances between words are captured differently by languages. For instance, both money and currency in English translate into sika in Twi (and other 32 English words which translate to 14 Twi words belong to this category) and drink in English is translated as Nsa or nom depending on the part of speech (noun for the former, verb for the latter). 17 English words fall into this category. In translating these, we picked the translation that best suits the context (other word in the pair). In two cases, the correlation is not fulfilled at all: soap–opera and star–movies are not related in the Twi language and the score has been modified accordingly."
],
[
"In this section, we describe the architectures used for learning word embeddings for the Twi and Yorùbá languages. Also, we discuss the quality of the embeddings as measured by the correlation with human judgements on the translated wordSim-353 test sets and by the F1 score in a NER task."
],
[
"Modeling sub-word units has recently become a popular way to address out-of-vocabulary word problem in NLP especially in word representation learning BIBREF19, BIBREF2, BIBREF4. A sub-word unit can be a character, character $n$-grams, or heuristically learned Byte Pair Encodings (BPE) which work very well in practice especially for morphologically rich languages. Here, we consider two word embedding models that make use of character-level information together with word information: Character Word Embedding (CWE) BIBREF20 and fastText BIBREF2. Both of them are extensions of the Word2Vec architectures BIBREF0 that model sub-word units, character embeddings in the case of CWE and character $n$-grams for fastText.",
"CWE was introduced in 2015 to model the embeddings of characters jointly with words in order to address the issues of character ambiguities and non-compositional words especially in the Chinese language. A word or character embedding is learned in CWE using either CBOW or skipgram architectures, and then the final word embedding is computed by adding the character embeddings to the word itself:",
"where $w_j$ is the word embedding of $x_j$, $N_j$ is the number of characters in $x_j$, and $c_k$ is the embedding of the $k$-th character $c_k$ in $x_j$.",
"Similarly, in 2017 fastText was introduced as an extension to skipgram in order to take into account morphology and improve the representation of rare words. In this case the embedding of a word also includes the embeddings of its character $n$-grams:",
"where $w_j$ is the word embedding of $x_j$, $G_j$ is the number of character $n$-grams in $x_j$ and $g_k$ is the embedding of the $k$-th $n$-gram.",
"cwe also proposed three alternatives to learn multiple embeddings per character and resolve ambiguities: (i) position-based character embeddings where each character has different embeddings depending on the position it appears in a word, i.e., beginning, middle or end (ii) cluster-based character embeddings where a character can have $K$ different cluster embeddings, and (iii) position-based cluster embeddings (CWE-LP) where for each position $K$ different embeddings are learned. We use the latter in our experiments with CWE but no positional embeddings are used with fastText.",
"Finally, we consider a contextualized embedding architecture, BERT BIBREF4. BERT is a masked language model based on the highly efficient and parallelizable Transformer architecture BIBREF21 known to produce very rich contextualized representations for downstream NLP tasks.",
"The architecture is trained by jointly conditioning on both left and right contexts in all the transformer layers using two unsupervised objectives: Masked LM and Next-sentence prediction. The representation of a word is therefore learned according to the context it is found in.",
"Training contextual embeddings needs of huge amounts of corpora which are not available for low-resourced languages such as Yorùbá and Twi. However, Google provided pre-trained multilingual embeddings for 102 languages including Yorùbá (but not Twi)."
],
[
"As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.",
"Facebook released pre-trained word embeddings using fastText for 294 languages trained on Wikipedia BIBREF2 (F1 in tables) and for 157 languages trained on Wikipedia and Common Crawl BIBREF7 (F2). For Yorùbá, both versions are available but only embeddings trained on Wikipedia are available for Twi. We consider these embeddings the result of training on what we call massively-extracted corpora. Notice that training settings for both embeddings are not exactly the same, and differences in performance might come both from corpus size/quality but also from the background model. The 294-languages version is trained using skipgram, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 5 negatives. The 157-languages version is trained using CBOW with position-weights, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 10 negatives.",
"We want to compare the performance of these embeddings with the equivalent models that can be obtained by training on the different sources verified by native speakers of Twi and Yorùbá; what we call curated corpora and has been described in Section SECREF4 For the comparison, we define 3 datasets according to the quality and quantity of textual data used for training: (i) Curated Small Dataset (clean), C1, about 1.6 million tokens for Yorùbá and over 735 k tokens for Twi. The clean text for Twi is the Bible and for Yoruba all texts marked under the C1 column in Table TABREF7. (ii) In Curated Small Dataset (clean + noisy), C2, we add noise to the clean corpus (Wikipedia articles for Twi, and BBC Yorùbá news articles for Yorùbá). This increases the number of training tokens for Twi to 742 k tokens and Yorùbá to about 2 million tokens. (iii) Curated Large Dataset, C3 consists of all available texts we are able to crawl and source out for, either clean or noisy. The addition of JW300 BIBREF22 texts increases the vocabulary to more than 10 k tokens in both languages.",
"We train our fastText systems using a skipgram model with an embedding size of 300 dimensions, context window size of 5, 10 negatives and $n$-grams ranging from 3 to 6 characters similarly to the pre-trained models for both languages. Best results are obtained with minimum word count of 3.",
"Table TABREF15 shows the Spearman correlation between human judgements and cosine similarity scores on the wordSim-353 test set. Notice that pre-trained embeddings on Wikipedia show a very low correlation with humans on the similarity task for both languages ($\\rho $=$0.14$) and their performance is even lower when Common Crawl is also considered ($\\rho $=$0.07$ for Yorùbá). An important reason for the low performance is the limited vocabulary. The pre-trained Twi model has only 935 tokens. For Yorùbá, things are apparently better with more than 150 k tokens when both Wikipedia and Common Crawl are used but correlation is even lower. An inspection of the pre-trained embeddings indicates that over 135 k words belong to other languages mostly English, French and Arabic.",
"If we focus only on Wikipedia, we see that many texts are without diacritics in Yorùbá and often make use of mixed dialects and English sentences in Twi.",
"The Spearman $\\rho $ correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\\rho =0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found out that adding some noisy texts (C2 dataset) slightly improves the correlation for Twi language but not for the Yorùbá language. The Twi language benefits from Wikipedia articles because its inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks which increases the vocabulary size at the cost of an increment in the ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and with full diacritics. Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task a $\\Delta \\rho =+0.25$ or, equivalently, by an increment on $\\rho $ of 170% (Twi) and 180% (Yorùbá)."
],
[
"The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations. In this work, we compare the standard fastText methodology to include sub-word information with the character-enhanced approach with position-based clustered embeddings (CWE-LP as introduced in Section SECREF17). With the latter, we expect to specifically address the ambiguity present in a language that does not translate the different oral tones on vowels into the written language.",
"The character-enhanced word embeddings are trained using a skipgram architecture with cluster-based embeddings and an embedding size of 300 dimensions, context window-size of 5, and 5 negative samples. In this case, the best performance is obtained with a minimum word count of 1, and that increases the effective vocabulary that is used for training the embeddings with respect to the fastText experiments reported in Table TABREF15.",
"We repeat the same experiments as with fastText and summarise them in Table TABREF16. If we compare the relative numbers for the three datasets (C1, C2 and C3) we observe the same trends as before: the performance of the embeddings in the similarity task improves with the vocabulary size when the training data can be considered clean, but the performance diminishes when the data is noisy.",
"According to the results, CWE is specially beneficial for Twi but not always for Yorùbá. Clean Yorùbá text, does not have the ambiguity issues at character-level, therefore the $n$-gram approximation works better when enough clean data is used ($\\rho ^{C3}_{CWE}=0.354$ vs. $\\rho ^{C3}_{fastText}=0.391$) but it does not when too much noisy data (no diacritics, therefore character-level information would be needed) is used ($\\rho ^{C2}_{CWE}=0.345$ vs. $\\rho ^{C2}_{fastText}=0.302$). For Twi, the character-level information reinforces the benefits of clean data and the best correlation with human judgements is reached with CWE embeddings ($\\rho ^{C2}_{CWE}=0.437$ vs. $\\rho ^{C2}_{fastText}=0.388$)."
],
[
"In order to go beyond the similarity task using static word vectors, we also investigate the quality of the multilingual BERT embeddings by fine-tuning a named entity recognition task on the Yorùbá Global Voices corpus.",
"One of the major advantages of pre-trained BERT embeddings is that fine-tuning of the model on downstream NLP tasks is typically computationally inexpensive, often with few number of epochs. However, the data the embeddings are trained on has the same limitations as that used in massive word embeddings. Fine-tuning involves replacing the last layer of BERT used optimizing the masked LM with a task-dependent linear classifier or any other deep learning architecture, and training all the model parameters end-to-end. For the NER task, we obtain the token-level representation from BERT and train a linear classifier for sequence tagging.",
"Similar to our observations with non-contextualized embeddings, we find out that fine-tuning the pre-trained multilingual-uncased BERT for 4 epochs on the NER task gives an F1 score of 0. If we do the same experiment in English, F1 is 58.1 after 4 epochs.",
"That shows how pre-trained embeddings by themselves do not perform well in downstream tasks on low-resource languages. To address this problem for Yorùbá, we fine-tune BERT representations on the Yorùbá corpus in two ways: (i) using the multilingual vocabulary, and (ii) using only Yorùbá vocabulary. In both cases diacritics are ignored to be consistent with the base model training.",
"As expected, the fine-tuning of the pre-trained BERT on the Yorùbá corpus in the two configurations generates better representations than the base model. These models are able to achieve a better performance on the NER task with an average F1 score of over 47% (see Table TABREF26 for the comparative). The fine-tuned BERT model with only Yorùbá vocabulary further increases by more than 4% in F1 score obtained with the tuning that uses the multilingual vocabulary. Although we do not have enough data to train BERT from scratch, we observe that fine-tuning BERT on a limited amount of monolingual data of a low-resource language helps to improve the quality of the embeddings. The same observation holds true for high-resource languages like German and French BIBREF23."
],
[
"In this paper, we present curated word and contextual embeddings for Yorùbá and Twi. For this purpose, we gather and select corpora and study the most appropriate techniques for the languages. We also create test sets for the evaluation of the word embeddings within a word similarity task (wordsim353) and the contextual embeddings within a NER task. Corpora, embeddings and test sets are available in github.",
"In our analysis, we show how massively generated embeddings perform poorly for low-resourced languages as compared to the performance for high-resourced ones. This is due both to the quantity but also the quality of the data used. While the Pearson $\\rho $ correlation for English obtained with fastText embeddings trained on Wikipedia (WP) and Common Crawl (CC) are $\\rho _{WP}$=$0.67$ and $\\rho _{WP+CC}$=$0.78$, the equivalent ones for Yorùbá are $\\rho _{WP}$=$0.14$ and $\\rho _{WP+CC}$=$0.07$. For Twi, only embeddings with Wikipedia are available ($\\rho _{WP}$=$0.14$). By carefully gathering high-quality data and optimising the models to the characteristics of each language, we deliver embeddings with correlations of $\\rho $=$0.39$ (Yorùbá) and $\\rho $=$0.44$ (Twi) on the same test set, still far from the high-resourced models, but representing an improvement over $170\\%$ on the task.",
"In a low-resourced setting, the data quality, processing and model selection is more critical than in a high-resourced scenario. We show how the characteristics of a language (such as diacritization in our case) should be taken into account in order to choose the relevant data and model to use. As an example, Twi word embeddings are significantly better when training on 742 k selected tokens than on 16 million noisy tokens, and when using a model that takes into account single character information (CWE-LP) instead of $n$-gram information (fastText).",
"Finally, we want to note that, even within a corpus, the quality of the data might depend on the language. Wikipedia is usually used as a high-quality freely available multilingual corpus as compared to noisier data such as Common Crawl. However, for the two languages under study, Wikipedia resulted to have too much noise: interference from other languages, text clearly written by non-native speakers, lack of diacritics and mixture of dialects. The JW300 corpus on the other hand, has been rated as high-quality by our native Yorùbá speakers, but as noisy by our native Twi speakers. In both cases, experiments confirm the conclusions."
],
[
"The authors thank Dr. Clement Odoje of the Department of Linguistics and African Languages, University of Ibadan, Nigeria and Olóyè Gbémisóyè Àrdèó for helping us with the Yorùbá translation of the WordSim-353 word pairs and Dr. Felix Y. Adu-Gyamfi and Ps. Isaac Sarfo for helping with the Twi translation. We also thank the members of the Niger-Volta Language Technologies Institute for providing us with clean Yorùbá corpus",
"The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee). Responsibility for the content of this publication is with the authors."
]
]
} | {
"question": [
"What turn out to be more important high volume or high quality data?",
"How much is model improved by massive data and how much by quality?",
"What two architectures are used?"
],
"question_id": [
"347e86893e8002024c2d10f618ca98e14689675f",
"10091275f777e0c2890c3ac0fd0a7d8e266b57cf",
"cbf1137912a47262314c94d36ced3232d5fa1926"
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"only high-quality data helps"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The Spearman $\\rho $ correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\\rho =0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found out that adding some noisy texts (C2 dataset) slightly improves the correlation for Twi language but not for the Yorùbá language. The Twi language benefits from Wikipedia articles because its inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks which increases the vocabulary size at the cost of an increment in the ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and with full diacritics. Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task a $\\Delta \\rho =+0.25$ or, equivalently, by an increment on $\\rho $ of 170% (Twi) and 180% (Yorùbá)."
],
"highlighted_evidence": [
"One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps."
]
},
{
"unanswerable": false,
"extractive_spans": [
"high-quality"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The Spearman $\\rho $ correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\\rho =0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found out that adding some noisy texts (C2 dataset) slightly improves the correlation for Twi language but not for the Yorùbá language. The Twi language benefits from Wikipedia articles because its inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks which increases the vocabulary size at the cost of an increment in the ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and with full diacritics. Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task a $\\Delta \\rho =+0.25$ or, equivalently, by an increment on $\\rho $ of 170% (Twi) and 180% (Yorùbá)."
],
"highlighted_evidence": [
"One would expect that training with more data would improve the quality of the embeddings, but we found out with the results obtained with the C3 dataset, that only high-quality data helps."
]
}
],
"annotation_id": [
"46dba8cddcfcbf57b2837040db3a5e9a5f7ceaa3",
"b5922f00879502196670bb7b26b229547de5fec4"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"863f554da4c30e1548dffc1da53632c9af7f005a"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"fastText",
"CWE-LP"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.",
"The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations. In this work, we compare the standard fastText methodology to include sub-word information with the character-enhanced approach with position-based clustered embeddings (CWE-LP as introduced in Section SECREF17). With the latter, we expect to specifically address the ambiguity present in a language that does not translate the different oral tones on vowels into the written language."
],
"highlighted_evidence": [
"As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.",
"The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations. In this work, we compare the standard fastText methodology to include sub-word information with the character-enhanced approach with position-based clustered embeddings (CWE-LP as introduced in Section SECREF17)."
]
}
],
"annotation_id": [
"0961f0f256f4d2c31bcc6e188931422f79883a5a"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Summary of the corpora used in the analysis. The last 3 columns indicate in which dataset (C1, C2 or C3) are the different sources included (see text, Section 5.2.).",
"Table 2: Number of tokens per named entity type in the Global Voices Yorùbá corpus.",
"Table 3: FastText embeddings: Spearman ρ correlation between human judgements and similarity scores on the wordSim353 for the three datasets analysed (C1, C2 and C3). The comparison with massive fastText embeddings is shown in the top rows.",
"Table 4: CWE embeddings: Spearman ρ correlation between human evaluation and embedding similarities for the three datasets analysed (C1, C2 and C3).",
"Table 5: NER F1 score on Global Voices Yorùbá corpus."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"7-Table5-1.png"
]
} |
1810.04528 | Is there Gender bias and stereotype in Portuguese Word Embeddings? | In this work, we propose an analysis of the presence of gender bias associated with professions in Portuguese word embeddings. The objective of this work is to study gender implications related to stereotyped professions for women and men in the context of the Portuguese language. | {
"section_name": [
"Introduction",
"Related Work",
"Portuguese Embedding",
"Proposed Approach",
"Experiments",
"Final Remarks"
],
"paragraphs": [
[
"Recently, the transformative potential of machine learning (ML) has propelled ML into the forefront of mainstream media. In Brazil, the use of such technique has been widely diffused gaining more space. Thus, it is used to search for patterns, regularities or even concepts expressed in data sets BIBREF0 , and can be applied as a form of aid in several areas of everyday life.",
"Among the different definitions, ML can be seen as the ability to improve performance in accomplishing a task through the experience BIBREF1 . Thus, BIBREF2 presents this as a method of inferences of functions or hypotheses capable of solving a problem algorithmically from data representing instances of the problem. This is an important way to solve different types of problems that permeate computer science and other areas.",
"One of the main uses of ML is in text processing, where the analysis of the content the entry point for various learning algorithms. However, the use of this content can represent the insertion of different types of bias in training and may vary with the context worked. This work aims to analyze and remove gender stereotypes from word embedding in Portuguese, analogous to what was done in BIBREF3 for the English language. Hence, we propose to employ a public word2vec model pre-trained to analyze gender bias in the Portuguese language, quantifying biases present in the model so that it is possible to reduce the spreading of sexism of such models. There is also a stage of bias reducing over the results obtained in the model, where it is sought to analyze the effects of the application of gender distinction reduction techniques.",
"This paper is organized as follows: Section SECREF2 discusses related works. Section SECREF3 presents the Portuguese word2vec embeddings model used in this paper and Section SECREF4 proposes our method. Section SECREF5 presents experimental results, whose purpose is to verify results of a de-bias algorithm application in Portuguese embeddings word2vec model and a short discussion about it. Section SECREF6 brings our concluding remarks."
],
[
"There is a wide range of techniques that provide interesting results in the context of ML algorithms geared to the classification of data without discrimination; these techniques range from the pre-processing of data BIBREF4 to the use of bias removal techniques BIBREF5 in fact. Approaches linked to the data pre-processing step usually consist of methods based on improving the quality of the dataset after which the usual classification tools can be used to train a classifier. So, it starts from a baseline already stipulated by the execution of itself. On the other side of the spectrum, there are Unsupervised and semi-supervised learning techniques, that are attractive because they do not imply the cost of corpus annotation BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .",
"The bias reduction is studied as a way to reduce discrimination through classification through different approaches BIBREF10 BIBREF11 . In BIBREF12 the authors propose to specify, implement, and evaluate the “fairness-aware\" ML interface called themis-ml. In this interface, the main idea is to pick up a data set from a modified dataset. Themis-ml implements two methods for training fairness-aware models. The tool relies on two methods to make agnostic model type predictions: Reject Option Classification and Discrimination-Aware Ensemble Classification, these procedures being used to post-process predictions in a way that reduces potentially discriminatory predictions. According to the authors, it is possible to perceive the potential use of the method as a means of reducing bias in the use of ML algorithms.",
"In BIBREF3 , the authors propose a method to hardly reduce bias in English word embeddings collected from Google News. Using word2vec, they performed a geometric analysis of gender direction of the bias contained in the data. Using this property with the generation of gender-neutral analogies, a methodology was provided for modifying an embedding to remove gender stereotypes. Some metrics were defined to quantify both direct and indirect gender biases in embeddings and to develop algorithms to reduce bias in some embedding. Hence, the authors show that embeddings can be used in applications without amplifying gender bias."
],
[
"In BIBREF13 , the quality of the representation of words through vectors in several models is discussed. According to the authors, the ability to train high-quality models using simplified architectures is useful in models composed of predictive methods that try to predict neighboring words with one or more context words, such as Word2Vec. Word embeddings have been used to provide meaningful representations for words in an efficient way.",
"In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal.",
"The authors of BIBREF14 claim to have collected a large corpus from several sources to obtain a multi-genre corpus representative of the Portuguese language. Hence, it comprehensively covers different expressions of the language, making it possible to analyze gender bias and stereotype in Portuguese word embeddings. The dataset used was tokenized and normalized by the authors to reduce the corpus vocabulary size, under the premise that vocabulary reduction provides more representative vectors."
],
[
"Some linguists point out that the female gender is, in Portuguese, a particularization of the masculine. In this way the only gender mark is the feminine, the others being considered without gender (including names considered masculine). In BIBREF15 the gender representation in Portuguese is associated with a set of phenomena, not only from a linguistic perspective but also from a socio-cultural perspective. Since most of the termination of words (e.g., advogada and advogado) are used to indicate to whom the expression refers, stereotypes can be explained through communication. This implies the presence of biases when dealing with terms such as those referring to professions.",
"Figure FIGREF1 illustrates the approach proposed in this work. First, using a list of professions relating the identification of female and male who perform it as a parameter, we evaluate the accuracy of similarity generated by the embeddings. Then, getting the biased results, we apply the De-bias algorithm BIBREF3 aiming to reduce sexist analogies previous generated. Thus, all the results are analyzed by comparing the accuracies.",
"Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the BIBREF3 . The work is focused on the analysis of gender bias associated with professions in word embeddings. So therefore into the evaluation of the accuracy of the associations generated, aiming at achieving results as good as possible without prejudicing the evaluation metrics.",
"Algorithm SECREF4 describes the method performed during the evaluation of the gender bias presence. In this method we try to evaluate the accuracy of the analogies generated through the model, that is, to verify the cases of association matching generated between the words.",
"[!htb] Model Evaluation [1]",
"w2v_evaluate INLINEFORM0 open_model( INLINEFORM1 ) count = 0 INLINEFORM2 in INLINEFORM3 read list of tuples x = model.most_similar(positive=[`ela', male], negative=[`ele'])",
"x = female count += 1 accuracy = count/size(profession_pairs) return accuracy"
],
[
"The purpose of this section is to perform different analysis concerning bias in word2vec models with Portuguese embeddings. The Continuous Bag-of-Words model used was provided by BIBREF14 (described in Section SECREF3 ). For these experiments, we use a model containing 934966 words of dimension 300 per vector representation. To realize the experiments, a list containing fifty professions labels for female and male was used as the parameter of similarity comparison.",
"Using the python library gensim, we evaluate the extreme analogies generated when comparing vectors like: INLINEFORM0 , where INLINEFORM1 represents the item from professions list and INLINEFORM2 the expected association. The most similarity function finds the top-N most similar entities, computing cosine similarity between a simple mean of the projection weight vectors of the given docs. Figure FIGREF4 presents the most extreme analogies results obtained from the model using these comparisons.",
"Applying the Algorithm SECREF4 , we check the accuracy obtained with the similarity function before and after the application of the de-bias method. Table TABREF3 presents the corresponding results. In cases like the analogy of `garçonete' to `stripper' (Figure FIGREF4 , line 8), it is possible to observe that the relationship stipulated between terms with sexual connotation and females is closer than between females and professions. While in the male model, even in cases of non-compliance, the closest analogy remains in the professional environment.",
"Using a confidence factor of 99%, when comparing the correctness levels of the model with and without the reduction of bias, the prediction of the model with bias is significantly better. Different authors BIBREF16 BIBREF17 show that the removal of bias in models produces a negative impact on the quality of the model. On the other hand, it is observed that even with a better hit rate the correctness rate in the prediction of related terms is still low."
],
[
"This paper presents an analysis of the presence of gender bias in Portuguese word embeddings. Even though it is a work in progress, the proposal showed promising results in analyzing predicting models.",
"A possible extension of the work involves deepening the analysis of the results obtained, seeking to achieve higher accuracy rates and fairer models to be used in machine learning techniques. Thus, these studies can involve tests with different methods of pre-processing the data to the use of different models, as well as other factors that may influence the results generated. This deepening is necessary since the model's accuracy is not high.",
"To conclude, we believe that the presence of gender bias and stereotypes in the Portuguese language is found in different spheres of language, and it is important to study ways of mitigating different types of discrimination. As such, it can be easily applied to analyze racists bias into the language, such as different types of preconceptions."
]
]
} | {
"question": [
"Does this paper target European or Brazilian Portuguese?",
"What were the word embeddings trained on?",
"Which word embeddings are analysed?"
],
"question_id": [
"519db0922376ce1e87fcdedaa626d665d9f3e8ce",
"99a10823623f78dbff9ccecb210f187105a196e9",
"09f0dce416a1e40cc6a24a8b42a802747d2c9363"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"bias",
"bias",
"bias"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"0a93ba2daf6764079c983e70ca8609d6d1d8fa5c",
"c6686e4e6090f985be4cc72a08ca2d4948b355bb"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"large Portuguese corpus"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal.",
"Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the BIBREF3 . The work is focused on the analysis of gender bias associated with professions in word embeddings. So therefore into the evaluation of the accuracy of the associations generated, aiming at achieving results as good as possible without prejudicing the evaluation metrics."
],
"highlighted_evidence": [
"In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. ",
"Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the BIBREF3 . "
]
}
],
"annotation_id": [
"e0cd186397ec9543e48d25f5944fc9318542f1d5"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Continuous Bag-of-Words (CBOW)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal."
],
"highlighted_evidence": [
"The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. "
]
}
],
"annotation_id": [
"8b5278bfc35cf0a1b43ceb3418c2c5d20f213a31"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Fig. 1. Proposal",
"Fig. 2. Extreme Analogies"
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png"
]
} |
2002.02224 | Citation Data of Czech Apex Courts | In this paper, we introduce the citation data of the Czech apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). This dataset was automatically extracted from the corpus of texts of Czech court decisions - CzCDC 1.0. We obtained the citation data by building the natural language processing pipeline for extraction of the court decision identifiers. The pipeline included the (i) document segmentation model and the (ii) reference recognition model. Furthermore, the dataset was manually processed to achieve high-quality citation data as a base for subsequent qualitative and quantitative analyses. The dataset will be made available to the general public. | {
"section_name": [
"Introduction",
"Related work ::: Legal Citation Analysis",
"Related work ::: Reference Recognition",
"Related work ::: Data Availability",
"Related work ::: Document Segmentation",
"Methodology",
"Methodology ::: Dataset and models ::: CzCDC 1.0 dataset",
"Methodology ::: Dataset and models ::: Reference recognition model",
"Methodology ::: Dataset and models ::: Text segmentation model",
"Methodology ::: Pipeline",
"Results",
"Discussion",
"Conclusion",
"Acknowledgment"
],
"paragraphs": [
[
"Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light in the behavior of specific judges through document analysis or allowing complex studies into changing the nature of courts in transforming countries.",
"That being said, it is still difficult to create sufficiently large citation datasets to allow a complex research. In the case of the Czech Republic, it was difficult to obtain a relevant dataset of the court decisions of the apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). Due to its size, it is nearly impossible to extract the references manually. One has to reach out for an automation of such task. However, study of court decisions displayed many different ways that courts use to cite even decisions of their own, not to mention the decisions of other courts.The great diversity in citations led us to the use of means of the natural language processing for the recognition and the extraction of the citation data from court decisions of the Czech apex courts.",
"In this paper, we describe the tool ultimately used for the extraction of the references from the court decisions, together with a subsequent way of manual processing of the raw data to achieve a higher-quality dataset. Section SECREF2 maps the related work in the area of legal citation analysis (SectionSECREF1), reference recognition (Section SECREF2), text segmentation (Section SECREF4), and data availability (Section SECREF3). Section SECREF3 describes the method we used for the citation extraction, listing the individual models and the way we have combined these models into the NLP pipeline. Section SECREF4 presents results in the terms of evaluation of the performance of our pipeline, the statistics of the raw data, further manual processing and statistics of the final citation dataset. Section SECREF5 discusses limitations of our work and outlines the possible future development. Section SECREF6 concludes this paper."
],
[
"The legal citation analysis is an emerging phenomenon in the field of the legal theory and the legal empirical research.The legal citation analysis employs tools provided by the field of network analysis.",
"In spite of the long-term use of the citations in the legal domain (eg. the use of Shepard's Citations since 1873), interest in the network citation analysis increased significantly when Fowler et al. published the two pivotal works on the case law citations by the Supreme Court of the United States BIBREF0, BIBREF1. Authors used the citation data and network analysis to test the hypotheses about the function of stare decisis the doctrine and other issues of legal precedents. In the continental legal system, this work was followed by Winkels and de Ruyter BIBREF2. Authors adopted similar approach to Fowler to the court decisions of the Dutch Supreme Court. Similar methods were later used by Derlén and Lindholm BIBREF3, BIBREF4 and Panagis and Šadl BIBREF5 for the citation data of the Court of Justice of the European Union, and by Olsen and Küçüksu for the citation data of the European Court of Human Rights BIBREF6.",
"Additionally, a minor part in research in the legal network analysis resulted in the past in practical tools designed to help lawyers conduct the case law research. Kuppevelt and van Dijck built prototypes employing these techniques in the Netherlands BIBREF7. Görög a Weisz introduced the new legal information retrieval system, Justeus, based on a large database of the legal sources and partly on the network analysis methods. BIBREF8"
],
[
"The area of reference recognition already contains a large amount of work. It is concerned with recognizing text spans in documents that are referring to other documents. As such, it is a classical topic within the AI & Law literature.",
"The extraction of references from the Italian legislation based on regular expressions was reported by Palmirani et al. BIBREF9. The main goal was to bring references under a set of common standards to ensure the interoperability between different legal information systems.",
"De Maat et al. BIBREF10 focused on an automated detection of references to legal acts in Dutch language. Their approach consisted of a grammar covering increasingly complex citation patterns.",
"Opijnen BIBREF11 aimed for a reference recognition and a reference standardization using regular expressions accounting for multiple the variant of the same reference and multiple vendor-specific identifiers.",
"The language specific work by Kríž et al. BIBREF12 focused on the detecting and classification references to other court decisions and legal acts. Authors used a statistical recognition (HMM and Perceptron algorithms) and reported F1-measure over 90% averaged over all entities. It is the state-of-art in the automatic recognition of references in the Czech court decisions. Unfortunately, it allows only for the detection of docket numbers and it is unable to recognize court-specific or vendor-specific identifiers in the court decisions.",
"Other language specific-work includes our previous reference recognition model presented in BIBREF13. Prediction model is based on conditional random fields and it allows recognition of different constituents which then establish both explicit and implicit case-law and doctrinal references. Parts of this model were used in the pipeline described further within this paper in Section SECREF3."
],
[
"Large scale quantitative and qualitative studies are often hindered by the unavailability of court data. Access to court decisions is often hindered by different obstacles. In some countries, court decisions are not available at all, while in some other they are accessible only through legal information systems, often proprietary. This effectively restricts the access to court decisions in terms of the bulk data. This issue was already approached by many researchers either through making available selected data for computational linguistics studies or by making available datasets of digitized data for various purposes. Non-exhaustive list of publicly available corpora includes British Law Report Corpus BIBREF14, The Corpus of US Supreme Court Opinions BIBREF15,the HOLJ corpus BIBREF16, the Corpus of Historical English Law Reports, Corpus de Sentencias Penales BIBREF17, Juristisches Referenzkorpus BIBREF18 and many others.",
"Language specific work in this area is presented by the publicly available Czech Court Decisions Corpus (CzCDC 1.0) BIBREF19. This corpus contains majority of court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court, hence allowing a large-scale extraction of references to yield representative results. The CzCDC 1.0 was used as a dataset for extraction of the references as is described further within this paper in Section SECREF3. Unfortunately, despite containing 237 723 court decisions issued between 1st January 1993 and 30th September 2018, it is not complete. This fact is reflected in the analysis of the results."
],
[
"A large volume of legal information is available in unstructured form, which makes processing these data a challenging task – both for human lawyers and for computers. Schweighofer BIBREF20 called for generic tools allowing a document segmentation to ease the processing of unstructured data by giving them some structure.",
"Topic-based segmentation often focuses on the identifying specific sentences that present borderlines of different textual segments.",
"The automatic segmentation is not an individual goal – it always serves as a prerequisite for further tasks requiring structured data. Segmentation is required for the text summarization BIBREF21, BIBREF22, keyword extraction BIBREF23, textual information retrieval BIBREF24, and other applications requiring input in the form of structured data.",
"Major part of research is focused on semantic similarity methods.The computing similarity between the parts of text presumes that a decrease of similarity means a topical border of two text segments. This approach was introduced by Hearst BIBREF22 and was used by Choi BIBREF25 and Heinonen BIBREF26 as well.",
"Another approach takes word frequencies and presumes a border according to different key words extracted. Reynar BIBREF27 authored graphical method based on statistics called dotplotting. Similar techniques were used by Ye BIBREF28 or Saravanan BIBREF29. Bommarito et al. BIBREF30 introduced a Python library combining different features including pre-trained models to the use for automatic legal text segmentation. Li BIBREF31 included neural network into his method to segment Chinese legal texts.",
"Šavelka and Ashley BIBREF32 similarly introduced the machine learning based approach for the segmentation of US court decisions texts into seven different parts. Authors reached high success rates in recognizing especially the Introduction and Analysis parts of the decisions.",
"Language specific work includes the model presented by Harašta et al. BIBREF33. This work focuses on segmentation of the Czech court decisions into pre-defined topical segments. Parts of this segmentation model were used in the pipeline described further within this paper in Section SECREF3."
],
[
"In this paper, we present and describe the citation dataset of the Czech top-tier courts. To obtain this dataset, we have processed the court decisions contained in CzCDC 1.0 dataset by the NLP pipeline consisting of the segmentation model introduced in BIBREF33, and parts of the reference recognition model presented in BIBREF13. The process is described in this section."
],
[
"Novotná and Harašta BIBREF19 prepared a dataset of the court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court. The dataset contains 237,723 decisions published between 1st January 1993 and the 30th September 2018. These decisions are organised into three sub-corpora. The sub-corpus of the Supreme Court contains 111,977 decisions, the sub-corpus of the Supreme Administrative Court contains 52,660 decisions and the sub-corpus of the Constitutional Court contains 73,086 decisions. Authors in BIBREF19 assessed that the CzCDC currently contains approximately 91% of all decisions of the Supreme Court, 99,5% of all decisions of the Constitutional Court, and 99,9% of all decisions of the Supreme Administrative Court. As such, it presents the best currently available dataset of the Czech top-tier court decisions."
],
[
"Harašta and Šavelka BIBREF13 introduced a reference recognition model trained specifically for the Czech top-tier courts. Moreover, authors made their training data available in the BIBREF34. Given the lack of a single citation standard, references in this work consist of smaller units, because these were identified as more uniform and therefore better suited for the automatic detection. The model was trained using conditional random fields, which is a random field model that is globally conditioned on an observation sequence O. The states of the model correspond to event labels E. Authors used a first-order conditional random fields. Model was trained for each type of the smaller unit independently."
],
[
"Harašta et al. BIBREF33, authors introduced the model for the automatic segmentation of the Czech court decisions into pre-defined multi-paragraph parts. These segments include the Header (introduction of given case), History (procedural history prior the apex court proceeding), Submission/Rejoinder (petition of plaintiff and response of defendant), Argumentation (argumentation of the court hearing the case), Footer (legally required information, such as information about further proceedings), Dissent and Footnotes. The model for automatic segmentation of the text was trained using conditional random fields. The model was trained for each type independently."
],
[
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"As the first step, every document in the CzCDC 1.0 was segmented using the text segmentation model. This allowed us to treat different parts of processed court documents differently in the further text processing. Specifically, it allowed us to subject only the specific part of a court decision, in this case the court argumentation, to further the reference recognition and extraction. A textual segment recognised as the court argumentation is then processed further.",
"As the second step, parts recognised by the text segmentation model as a court argumentation was processed using the reference recognition model. After carefully studying the evaluation of the model's performance in BIBREF13, we have decided to use only part of the said model. Specifically, we have employed the recognition of the court identifiers, as we consider the rest of the smaller units introduced by Harašta and Šavelka of a lesser value for our task. Also, deploying only the recognition of the court identifiers allowed us to avoid the problematic parsing of smaller textual units into the references. The text spans recognised as identifiers of court decisions are then processed further.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification.",
"Further processing included:",
"control and repair of incompletely identified court identifiers (manual);",
"identification and sorting of identifiers as belonging to Supreme Court, Supreme Administrative Court or Constitutional Court (rule-based, manual);",
"standardisation of different types of court identifiers (rule-based, manual);",
"parsing of identifiers with court decisions available in CzCDC 1.0."
],
[
"Overall, through the process described in Section SECREF3, we have retrieved three datasets of extracted references - one dataset per each of the apex courts. These datasets consist of the individual pairs containing the identification of the decision from which the reference was retrieved, and the identification of the referred documents. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court Decisions. These are numbers of text spans identified as references prior the further processing described in Section SECREF3.",
"These references include all identifiers extracted from the court decisions contained in the CzCDC 1.0. Therefore, this number includes all other court decisions, including lower courts, the Court of Justice of the European Union, the European Court of Human Rights, decisions of other public authorities etc. Therefore, it was necessary to classify these into references referring to decisions of the Supreme Court, Supreme Administrative Court, Constitutional Court and others. These groups then underwent a standardisation - or more precisely a resolution - of different court identifiers used by the Czech courts. Numbers of the references resulting from this step are shown in Table TABREF16.",
"Following this step, we linked court identifiers with court decisions contained in the CzCDC 1.0. Given that, the CzCDC 1.0 does not contain all the decisions of the respective courts, we were not able to parse all the references. Numbers of the references resulting from this step are shown in Table TABREF17."
],
[
"This paper introduced the first dataset of citation data of the three Czech apex courts. Understandably, there are some pitfalls and limitations to our approach.",
"As we admitted in the evaluation in Section SECREF9, the models we included in our NLP pipelines are far from perfect. Overall, we were able to achieve a reasonable recall and precision rate, which was further enhanced by several round of manual processing of the resulting data. However, it is safe to say that we did not manage to extract all the references. Similarly, because the CzCDC 1.0 dataset we used does not contain all the decisions of the respective courts, we were not able to parse all court identifiers to the documents these refer to. Therefore, the future work in this area may include further development of the resources we used. The CzCDC 1.0 would benefit from the inclusion of more documents of the Supreme Court, the reference recognition model would benefit from more refined training methods etc.",
"That being said, the presented dataset is currently the only available resource of its kind focusing on the Czech court decisions that is freely available to research teams. This significantly reduces the costs necessary to conduct these types of studies involving network analysis, and the similar techniques requiring a large amount of citation data."
],
[
"In this paper, we have described the process of the creation of the first dataset of citation data of the three Czech apex courts. The dataset is publicly available for download at https://github.com/czech-case-law-relevance/czech-court-citations-dataset."
],
[
"J.H., and T.N. gratefully acknowledge the support from the Czech Science Foundation under grant no. GA-17-20645S. T.N. also acknowledges the institutional support of the Masaryk University. This paper was presented at CEILI Workshop on Legal Data Analysis held in conjunction with Jurix 2019 in Madrid, Spain."
]
]
} | {
"question": [
"Did they experiment on this dataset?",
"How is quality of the citation measured?",
"How big is the dataset?"
],
"question_id": [
"ac706631f2b3fa39bf173cd62480072601e44f66",
"8b71ede8170162883f785040e8628a97fc6b5bcb",
"fa2a384a23f5d0fe114ef6a39dced139bddac20e"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
],
"highlighted_evidence": [
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
]
}
],
"annotation_id": [
"3bf5c275ced328b66fd9a07b30a4155fa476d779",
"ae80f5c5b782ad02d1dde21b7384bc63472f5796"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
],
"highlighted_evidence": [
"In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.",
"At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification."
]
}
],
"annotation_id": [
"ca22977516b8d2f165904d7e9742421ad8d742e2"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "903019 references",
"evidence": [
"Overall, through the process described in Section SECREF3, we have retrieved three datasets of extracted references - one dataset per each of the apex courts. These datasets consist of the individual pairs containing the identification of the decision from which the reference was retrieved, and the identification of the referred documents. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court Decisions. These are numbers of text spans identified as references prior the further processing described in Section SECREF3."
],
"highlighted_evidence": [
"As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court Decisions. These are numbers of text spans identified as references prior the further processing described in Section SECREF3."
]
}
],
"annotation_id": [
"0bdc7f448e47059d71a0ad3c075303900370856a"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
]
} | {
"caption": [
"Figure 1: NLP pipeline including the text segmentation, reference recognition and parsing of references to the specific document",
"Table 1: Model performance",
"Table 2: References sorted by categories, unlinked",
"Table 3: References linked with texts in CzCDC"
],
"file": [
"4-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png"
]
} |
2003.07433 | LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment | Veteran mental health is a significant national problem as large number of veterans are returning from the recent war in Iraq and continued military presence in Afghanistan. While significant existing works have investigated twitter posts-based Post Traumatic Stress Disorder (PTSD) assessment using blackbox machine learning techniques, these frameworks cannot be trusted by the clinicians due to the lack of clinical explainability. To obtain the trust of clinicians, we explore the big question, can twitter posts provide enough information to fill up clinical PTSD assessment surveys that have been traditionally trusted by clinicians? To answer the above question, we propose, LAXARY (Linguistic Analysis-based Exaplainable Inquiry) model, a novel Explainable Artificial Intelligent (XAI) model to detect and represent PTSD assessment of twitter users using a modified Linguistic Inquiry and Word Count (LIWC) analysis. First, we employ clinically validated survey tools for collecting clinical PTSD assessment data from real twitter users and develop a PTSD Linguistic Dictionary using the PTSD assessment survey results. Then, we use the PTSD Linguistic Dictionary along with machine learning model to fill up the survey tools towards detecting PTSD status and its intensity of corresponding twitter users. Our experimental evaluation on 210 clinically validated veteran twitter users provides promising accuracies of both PTSD classification and its intensity estimation. We also evaluate our developed PTSD Linguistic Dictionary's reliability and validity. | {
"section_name": [
"Introduction",
"Overview",
"Related Works",
"Demographics of Clinically Validated PTSD Assessment Tools",
"Twitter-based PTSD Detection",
"Twitter-based PTSD Detection ::: Data Collection",
"Twitter-based PTSD Detection ::: Pre-processing",
"Twitter-based PTSD Detection ::: PTSD Detection Baseline Model",
"LAXARY: Explainable PTSD Detection Model",
"LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation",
"LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary",
"LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation",
"Experimental Evaluation",
"Experimental Evaluation ::: Results",
"Challenges and Future Work",
"Conclusion"
],
"paragraphs": [
[
"Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high risk activities including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veteran Administration's National Center for PTSD (NCPTSD) suggests to reconceptualize PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high risk behaviors associated with it, as these may be directly addressed though behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD impacts between 15-20% of the veteran population which is typically chronic and treatment resistant BIBREF0. The PTSD patients support programs organized by different veterans peer support organization use a set of surveys for local weekly assessment to detect the intensity of PTSD among the returning veterans. However, recent advanced evidence-based care for PTSD sufferers surveys have showed that veterans, suffered with chronic PTSD are reluctant in participating assessments to the professionals which is another significant symptom of war returning veterans with PTSD. Several existing researches showed that, twitter posts of war veterans could be a significant indicator of their mental health and could be utilized to predict PTSD sufferers in time before going out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied on either blackbox machine learning methods or language models based sentiments extraction of posted texts which failed to obtain acceptability and trust of clinicians due to the lack of their explainability.",
"In the context of the above research problem, we aim to answer the following research questions",
"Given clinicians have trust on clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys using twitter posts analysis of war-veterans?",
"If possible, what sort of analysis and approach are needed to develop such XAI model to detect the prevalence and intensity of PTSD among war-veterans only using the social media (twitter) analysis where users are free to share their everyday mental and social conditions?",
"How much quantitative improvement do we observe in our model's ability to explain both detection and intensity estimation of PTSD?",
"In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians.",
"The key contributions of our work are summarized below,",
"The novelty of LAXARY lies on the proposed clinical surveys-based PTSD Linguistic dictionary creation with words/aspects which represents the instantaneous perturbation of twitter-based sentiments as a specific pattern and help calculate the possible scores of each survey question.",
"LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment.",
"Finally, we evaluate the accuracy of LAXARY model performance and reliability-validity of generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly messages posted in twitter, LAXARY can provide very high accuracy in filling up surveys towards identifying PTSD ($\\approx 96\\%$) and its intensity ($\\approx 1.2$ mean squared error)."
],
[
"Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) Develop PTSD Detection System using twitter posts of war-veterans(ii) design real surveys from the popular symptoms based mental disease assessment surveys; (iii) define single category and create PTSD Linguistic Dictionary for each survey question and multiple aspect/words for each question; (iv) calculate $\\alpha $-scores for each category and dimension based on linguistic inquiry and word count as well as the aspects/words based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\\alpha $-scores and $s$-scores of each category based on the $s$-scores of its dimensions; (vi) rank features according to the contributions of achieving separation among categories associated with different $\\alpha $-scores and $s$-scores; and select feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of selected features-based classification for filling up surveys based on classified categories i.e. PTSD assessment which is trustworthy among the psychiatry community."
],
[
"Twitter activity based mental health assessment has been utmost importance to the Natural Language Processing (NLP) researchers and social media analysts for decades. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used n-gram language model (CLM) based s-score measure setting up some user centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) one unigram language model (ULM); (ii) one character n-gram language model (CLM); and 3) one from the LIWC categories $\\alpha $-scores and found that last one gives more accuracy than other ones. BIBREF11 used two types of $s$-scores taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first person pronouns, and a decrease in third person pronouns, (via LIWC) is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).",
"All of the prior works used some random dictionary related to the human sentiment (positive/negative) word sets as category words to estimate the mental health but very few of them addressed the problem of explainability of their solution to obtain trust of clinicians. Islam et. al proposed an explainable topic modeling framework to rank different mental health features using Local Interpretable Model-Agnostic Explanations and visualize them to understand the features involved in mental health status classification using the BIBREF14 which fails to provide trust of clinicians due to its lack of interpretability in clinical terms. In this paper, we develop LAXARY model where first we start investigating clinically validated survey tools which are trustworthy methods of PTSD assessment among clinicians, build our category sets based on the survey questions and use these as dictionary words in terms of first person singular number pronouns aspect for next level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (similar to sentiment category scores of naive LIWC) which is both explainable and trustworthy to clinicians."
],
[
"There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )",
"High risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.",
"Moderate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.",
"Low risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.",
"No PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD."
],
[
"To develop an explainable model, we first need to develop twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model."
],
[
"We use an automated regular expression based searching to find potential veterans with PTSD in twitter, and then refine the list manually. First, we select different keywords to search twitter users of different categories. For example, to search self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet posts. To search veterans, we mostly visit to different twitter accounts of veterans organizations such as \"MA Women Veterans @WomenVeterans\", \"Illinois Veterans @ILVetsAffairs\", \"Veterans Benefits @VAVetBenefits\" etc. We define an inclusion criteria as follows: one twitter user will be part of this study if he/she describes himself/herself as a veteran in the introduction and have at least 25 tweets in last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts. We find 685 matching tweets which are manually reviewed to determine if they indicate a genuine statement of a diagnosis for PTSD. Next, we select the username that authored each of these tweets and retrieve last week's tweets via the Twitter API. We then filtered out users with less than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system.) This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in last one week. After filtering (as above) in total 2,423 users remain, whose tweets are used as negative examples developing a 2,728 user's entire weeks' twitter posts where 305 users are self-claimed PTSD sufferers. We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed as PTSD by any of the three surveys and rest of the 118 users are diagnosed with NO PTSD. Among the clinically diagnosed PTSD sufferers, 17 of them were not self-identified before. However, 7 of the self-identified PTSD sufferers are assessed with no PTSD by PTSD assessment tools. The response rates of PTSD and NO PTSD users are 27% and 12%. In summary, we have collected one week of tweets from 2,728 veterans where 305 users claimed to have diagnosed with PTSD. After distributing Dryhootch surveys, we have a dataset of 210 veteran twitter users among them 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys. The severity of the PTSD are estimated as Non-existent, light, moderate and high PTSD based on how many surveys support the existence of PTSD among the participants according to dryhootch manual BIBREF18, BIBREF19."
],
[
"We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. Fig FIGREF16 shows each of the 210 veteran twitter users' monthly average tweets. We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g. work,worked, working, worker, etc.) or “job*” (e.g. job, jobs, jobless, etc.) are identified as work-related Tweets, with the remaining categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e. overall Tweets, work-related Tweets, and non work-related Tweets) on a daily basis, and create a text file for each week for each group."
],
[
"We use Coppersmith proposed PTSD classification algorithm to develop our baseline blackbox model BIBREF11. We utilize our positive and negative PTSD data (+92,-118) to train three classifiers: (i) unigram language model (ULM) examining individual whole words, (ii) character n-gram language model (CLM), and (iii) LIWC based categorical models above all of the prior ones. The LMs have been shown effective for Twitter classification tasks BIBREF9 and LIWC has been previously used for analysis of mental health in Twitter BIBREF10. The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) from the tweets of PTSD users, and another model ($clm^{-}$ and $ulm^{-}$) from the tweets from No PTSD users. Each test tweet $t$ is scored by comparing probabilities from each LM called $s-score$",
"A threshold of 1 for $s-score$ divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contained a URL, as these often pertain to events external to the user.",
"We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine if there are differences in the language usage of PTSD users. We applied the LIWC battery and examined the distribution of words in their language. Each tweet was tokenized by separating on whitespace. For each user, for a subset of the LIWC categories, we measured the proportion of tweets that contained at least one word from that category. Specifically, we examined the following nine categories: first, second and third person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second person pronouns were used significantly less often by PTSD users, while third person pronouns and words about anxiety were used significantly more often."
],
[
"The heart of LAXARY framework is the construction of PTSD Linguistic Dictionary. Prior works show that linguistic dictionary based text analysis has been much effective in twitter based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind that develops its own linguistic dictionary to explain automatic PTSD assessment to confirm trustworthiness to clinicians."
],
[
"We use LIWC developed WordStat dictionary format for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words. Words in the WordStat dictionary file will be referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 is a sample WordStat dictionary. There are several steps to use this dictionary which are stated as follows:",
"Pronoun selection: At first we have to define the pronouns of the target sentiment. Here we used first person singular number pronouns (i.e., I, me, mine etc.) that means we only count those sentences or segments which are only related to first person singular number i.e., related to the person himself.",
"Category selection: We have to define the categories of each word set thus we can analyze the categories as well as dimensions' text analysis scores. We chose three categories based on the three different surveys: 1) DOSPERT scale; 2) BSSS scale; and 3) VIAS scale.",
"Dimension selection: We have to define the word sets (also called dimension) for each category. We chose one dimension for each of the questions under each category to reflect real survey system evaluation. Our chosen categories are state in Fig FIGREF20.",
"Score calculation $\\alpha $-score: $\\alpha $-scores refer to the Cronbach's alphas for the internal reliability of the specific words within each category. The binary alphas are computed on the ratio of occurrence and non-occurrence of each dictionary word whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within texts."
],
[
"After the PTSD Linguistic Dictionary has been created, we empirically evaluate its psychometric properties such as reliability and validity as per American Standards for educational and psychological testing guideline BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic dictionary is considered an item, and reliability is calculated based on each text file's response to each word item, which forms an $N$(number of text files) $\\times $ $J$(number of words or stems in a dictionary) data matrix. There are two ways to quantify such responses: using percentage data (uncorrected method), or using \"present or not\" data (binary method) BIBREF23. For the uncorrected method, the data matrix comprises percentage values of each word/stem are calculated from each text file. For the binary method, the data matrix quantifies whether or not a word was used in a text file where \"1\" represents yes and \"0\" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on its inter-correlation matrix among the word percentages. We assess reliability based on our selected 210 users' Tweets which further generated a 23,562 response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields reliability of .89 based on the uncorrected method, and .96 based on the binary method, which confirm the high reliability of our PTSD Dictionary created PTSD survey based categories. After assessing the reliability of the PTSD Linguistic dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminate validity involves evidence that two measures designed to assess different constructs are not too strongly related. In theory, we expect that the PTSD Linguistic dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results revealed that the PTSD Linguistic dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r=3.664,p$<$.001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity for our newly created PTSD Linguistic dictionary.",
""
],
[
"We use the exact similar method of LIWC to extract $\\alpha $-scores for each dimension and categories except we use our generated PTSD Linguistic Dictionary for the task BIBREF23. Thus we have total 16 $\\alpha $-scores in total. Meanwhile, we propose a new type of feature in this regard, which we called scaling-score ($s$-score). $s$-score is calculated from $\\alpha $-scores. The purpose of using $s$-score is to put exact scores of each of the dimension and category thus we can apply the same method used in real weekly survey system. The idea is, we divide each category into their corresponding scale factor (i.e., for DOSPERT scale, BSSS scale and VIAS scales) and divide them into 8, 3 and 5 scaling factors which are used in real survey system. Then we set the $s$-score from the scaling factors from the $\\alpha $-scores of the corresponding dimension of the questions. The algorithm is stated in Figure FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension. Then we add up all the $s$-score of the dimensions to calculate cumulative $s$-score of particular categories which is displayed in Fig FIGREF22. Finally, we have total 32 features among them 16 are $\\alpha $-scores and 16 are $s$-scores for each category (i.e. each question). We add both of $\\alpha $ and $s$ scores together and scale according to their corresponding survey score scales using min-max standardization. Then, the final output is a 16 valued matrix which represent the score for each questions from three different Dryhootch surveys. We use the output to fill up each survey, estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric."
],
[
"To validate the performance of LAXARY framework, we first divide the entire 210 users' twitter posts into training and test dataset. Then, we first developed PTSD Linguistic Dictionary from the twitter posts from training dataset and apply LAXARY framework on test dataset."
],
[
"To provide an initial results, we take 50% of users' last week's (the week they responded of having PTSD) data to develop PTSD Linguistic dictionary and apply LAXARY framework to fill up surveys on rest of 50% dataset. The distribution of this training-test dataset segmentation followed a 50% distribution of PTSD and No PTSD from the original dataset. Our final survey based classification results showed an accuracy of 96% in detecting PTSD and mean squared error of 1.2 in estimating its intensity given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively. Table TABREF29 shows the classification details of our experiment which provide the very good accuracy of our classification. To compare the outperformance of our method, we also implemented Coppersmith et. al. proposed method and achieved an 86% overall accuracy of detecting PTSD users BIBREF11 following the same training-test dataset distribution. Fig FIGREF28 illustrates the comparisons between LAXARY and Coppersmith et. al. proposed method. Here we can see, the outperformance of our proposed method as well as the importance of $s-score$ estimation. We also illustrates the importance of $\\alpha -score$ and $S-score$ in Fig FIGREF30. Fig FIGREF30 illustrates that if we change the number of training samples (%), LAXARY models outperforms Coppersmith et. al. proposed model under any condition. In terms of intensity, Coppersmith et. al. totally fails to provide any idea however LAXARY provides extremely accurate measures of intensity estimation for PTSD sufferers (as shown in Fig FIGREF31) which can be explained simply providing LAXARY model filled out survey details. Table TABREF29 shows the details of accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows the classification accuracy changes over the training sample sizes for each survey which shows that DOSPERT scale outperform other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week diagnosis of PTSD was taken), there are no significant patterns of PTSD detection."
],
[
"LAXARY is a highly ambitious model that targets to fill up clinically validated survey tools using only twitter posts. Unlike the previous twitter based mental health assessment tools, LAXARY provides a clinically interpretable model which can provide better classification accuracy and intensity of PTSD assessment and can easily obtain the trust of clinicians. The central challenge of LAXARY is to search twitter users from twitter search engine and manually label them for analysis. While developing PTSD Linguistic Dictionary, although we followed exactly same development idea of LIWC WordStat dictionary and tested reliability and validity, our dictionary was not still validated by domain experts as PTSD detection is highly sensitive issue than stress/depression detection. Moreover, given the extreme challenges of searching veterans in twitter using our selection and inclusion criteria, it was extremely difficult to manually find the evidence of the self-claimed PTSD sufferers. Although, we have shown extremely promising initial findings about the representation of a blackbox model into clinically trusted tools, using only 210 users' data is not enough to come up with a trustworthy model. Moreover, more clinical validation must be done in future with real clinicians to firmly validate LAXARY model provided PTSD assessment outcomes. In future, we aim to collect more data and run not only nationwide but also international-wide data collection to establish our innovation into a real tool. Apart from that, as we achieved promising results in detecting PTSD and its intensity using only twitter data, we aim to develop Linguistic Dictionary for other mental health issues too. Moreover, we will apply our proposed method in other types of mental illness such as depression, bipolar disorder, suicidal ideation and seasonal affective disorder (SAD) etc. As we know, accuracy of particular social media analysis depends on the dataset mostly. We aim to collect more data engaging more researchers to establish a set of mental illness specific Linguistic Database and evaluation technique to solidify the genralizability of our proposed method."
],
[
"To promote better comfort to the trauma patients, it is really important to detect Post Traumatic Stress Disorder (PTSD) sufferers in time before going out of control that may result catastrophic impacts on society, people around or even sufferers themselves. Although, psychiatrists invented several clinical diagnosis tools (i.e., surveys) by assessing symptoms, signs and impairment associated with PTSD, most of the times, the process of diagnosis happens at the severe stage of illness which may have already caused some irreversible damages of mental health of the sufferers. On the other hand, due to lack of explainability, existing twitter based methods are not trusted by the clinicians. In this paper, we proposed, LAXARY, a novel method of filling up PTSD assessment surveys using weekly twitter posts. As the clinical surveys are trusted and understandable method, we believe that this method will be able to gain trust of clinicians towards early detection of PTSD. Moreover, our proposed LAXARY model, which is first of its kind, can be used to develop any type of mental disorder Linguistic Dictionary providing a generalized and trustworthy mental health assessment framework of any kind."
]
]
} | {
"question": [
"Do they evaluate only on English datasets?",
"Do the authors mention any possible confounds in this study?",
"How is the intensity of the PTSD established?",
"How is LIWC incorporated into this system?",
"How many twitter users are surveyed using the clinically validated survey?",
"Which clinically validated survey tools are used?"
],
"question_id": [
"53712f0ce764633dbb034e550bb6604f15c0cacd",
"0bffc3d82d02910d4816c16b390125e5df55fd01",
"bdd8368debcb1bdad14c454aaf96695ac5186b09",
"3334f50fe1796ce0df9dd58540e9c08be5856c23",
"7081b6909cb87b58a7b85017a2278275be58bf60",
"1870f871a5bcea418c44f81f352897a2f53d0971"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"4e3a79dc56c6f39d1bec7bac257c57f279431967"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"fcf589c48d32bdf0ef4eab547f9ae22412f5805a"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively, the estimated intensity is established as mean squared error.",
"evidence": [
"To provide an initial results, we take 50% of users' last week's (the week they responded of having PTSD) data to develop PTSD Linguistic dictionary and apply LAXARY framework to fill up surveys on rest of 50% dataset. The distribution of this training-test dataset segmentation followed a 50% distribution of PTSD and No PTSD from the original dataset. Our final survey based classification results showed an accuracy of 96% in detecting PTSD and mean squared error of 1.2 in estimating its intensity given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively. Table TABREF29 shows the classification details of our experiment which provide the very good accuracy of our classification. To compare the outperformance of our method, we also implemented Coppersmith et. al. proposed method and achieved an 86% overall accuracy of detecting PTSD users BIBREF11 following the same training-test dataset distribution. Fig FIGREF28 illustrates the comparisons between LAXARY and Coppersmith et. al. proposed method. Here we can see, the outperformance of our proposed method as well as the importance of $s-score$ estimation. We also illustrates the importance of $\\alpha -score$ and $S-score$ in Fig FIGREF30. Fig FIGREF30 illustrates that if we change the number of training samples (%), LAXARY models outperforms Coppersmith et. al. proposed model under any condition. In terms of intensity, Coppersmith et. al. totally fails to provide any idea however LAXARY provides extremely accurate measures of intensity estimation for PTSD sufferers (as shown in Fig FIGREF31) which can be explained simply providing LAXARY model filled out survey details. Table TABREF29 shows the details of accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows the classification accuracy changes over the training sample sizes for each survey which shows that DOSPERT scale outperform other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week diagnosis of PTSD was taken), there are no significant patterns of PTSD detection."
],
"highlighted_evidence": [
" Our final survey based classification results showed an accuracy of 96% in detecting PTSD and mean squared error of 1.2 in estimating its intensity given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively. "
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "defined into four categories from high risk, moderate risk, to low risk",
"evidence": [
"There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )",
"High risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.",
"Moderate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.",
"Low risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.",
"No PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD."
],
"highlighted_evidence": [
"Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )\n\nHigh risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.\n\nModerate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.\n\nLow risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.\n\nNo PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD."
]
}
],
"annotation_id": [
"5fb7cea5f88219c0c6b7de07c638124a52ef5701",
"b62b56730f7536bfcb03b0e784d74674badcc806"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" For each user, we calculate the proportion of tweets scored positively by each LIWC category."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"A threshold of 1 for $s-score$ divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contained a URL, as these often pertain to events external to the user."
],
"highlighted_evidence": [
"For each user, we calculate the proportion of tweets scored positively by each LIWC category. "
]
},
{
"unanswerable": false,
"extractive_spans": [
"to calculate the possible scores of each survey question using PTSD Linguistic Dictionary "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment."
],
"highlighted_evidence": [
"LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment."
]
}
],
"annotation_id": [
"348b89ed7cf9b893cd45d99de412e0f424f97f2a",
"9a5f2c8b73ad98f1e28c384471b29b92bcf38de5"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"210"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. Fig FIGREF16 shows each of the 210 veteran twitter users' monthly average tweets. We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g. work,worked, working, worker, etc.) or “job*” (e.g. job, jobs, jobless, etc.) are identified as work-related Tweets, with the remaining categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e. overall Tweets, work-related Tweets, and non work-related Tweets) on a daily basis, and create a text file for each week for each group."
],
"highlighted_evidence": [
"We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. "
]
}
],
"annotation_id": [
"10d346425fb3693cdf36e224fb28ca37d57b71a0"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"DOSPERT, BSSS and VIAS"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use an automated regular expression based searching to find potential veterans with PTSD in twitter, and then refine the list manually. First, we select different keywords to search twitter users of different categories. For example, to search self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet posts. To search veterans, we mostly visit to different twitter accounts of veterans organizations such as \"MA Women Veterans @WomenVeterans\", \"Illinois Veterans @ILVetsAffairs\", \"Veterans Benefits @VAVetBenefits\" etc. We define an inclusion criteria as follows: one twitter user will be part of this study if he/she describes himself/herself as a veteran in the introduction and have at least 25 tweets in last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts. We find 685 matching tweets which are manually reviewed to determine if they indicate a genuine statement of a diagnosis for PTSD. Next, we select the username that authored each of these tweets and retrieve last week's tweets via the Twitter API. We then filtered out users with less than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system.) This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in last one week. After filtering (as above) in total 2,423 users remain, whose tweets are used as negative examples developing a 2,728 user's entire weeks' twitter posts where 305 users are self-claimed PTSD sufferers. We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed as PTSD by any of the three surveys and rest of the 118 users are diagnosed with NO PTSD. Among the clinically diagnosed PTSD sufferers, 17 of them were not self-identified before. However, 7 of the self-identified PTSD sufferers are assessed with no PTSD by PTSD assessment tools. The response rates of PTSD and NO PTSD users are 27% and 12%. In summary, we have collected one week of tweets from 2,728 veterans where 305 users claimed to have diagnosed with PTSD. After distributing Dryhootch surveys, we have a dataset of 210 veteran twitter users among them 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys. The severity of the PTSD are estimated as Non-existent, light, moderate and high PTSD based on how many surveys support the existence of PTSD among the participants according to dryhootch manual BIBREF18, BIBREF19.",
"There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )"
],
"highlighted_evidence": [
"We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. ",
"Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )"
]
}
],
"annotation_id": [
"6185d05f806ff3e054ec5bb7fd773679b7fbb6d9"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Fig. 1. Overview of our framework",
"Fig. 2. WordStat dictionary sample",
"TABLE I DRYHOOTCH CHOSEN PTSD ASSESSMENT SURVEYS (D: DOSPERT, B: BSSS AND V: VIAS) DEMOGRAPHICS",
"TABLE II SAMPLE DRYHOOTCH CHOSEN QUESTIONS FROM DOSPERT",
"Fig. 3. Each 210 users’ average tweets per month",
"Fig. 4. Category Details",
"Fig. 5. S-score table details",
"Fig. 6. Comparisons between Coppersmith et. al. and our method",
"TABLE V LAXARY MODEL BASED CLASSIFICATION DETAILS",
"Fig. 7. Percentages of Training dataset and their PTSD detection accuracy results comparisons. Rest of the dataset has been used for testing",
"Fig. 9. Percentages of Training dataset and their Accuracies for each Survey Tool. Rest of the dataset has been used for testing",
"Fig. 8. Percentages of Training dataset and their Mean Squared Error (MSE) of PTSD Intensity. Rest of the dataset has been used for testing",
"Fig. 10. Weekly PTSD detection accuracy change comparisons with baseline model"
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-TableI-1.png",
"3-TableII-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"5-Figure5-1.png",
"6-Figure6-1.png",
"6-TableV-1.png",
"7-Figure7-1.png",
"7-Figure9-1.png",
"7-Figure8-1.png",
"7-Figure10-1.png"
]
} |
2003.12218 | Comprehensive Named Entity Recognition on CORD-19 with Distant or Weak Supervision | We created this CORD-19-NER dataset with comprehensive named entity recognition (NER) on the COVID-19 Open Research Dataset Challenge (CORD-19) corpus (2020-03-13). This CORD-19-NER dataset covers 74 fine-grained named entity types. It is automatically generated by combining the annotation results from four sources: (1) pre-trained NER model on 18 general entity types from Spacy, (2) pre-trained NER model on 18 biomedical entity types from SciSpacy, (3) knowledge base (KB)-guided NER model on 127 biomedical entity types with our distantly-supervised NER method, and (4) seed-guided NER model on 8 new entity types (specifically related to the COVID-19 studies) with our weakly-supervised NER method. We hope this dataset can help the text mining community build downstream applications. We also hope this dataset can bring insights for the COVID-19 studies, both on the biomedical side and on the social side. | {
"section_name": [
"Introduction",
"CORD-19-NER Dataset ::: Corpus",
"CORD-19-NER Dataset ::: NER Methods",
"Results ::: NER Annotation Results",
"Results ::: Top-Frequent Entity Summarization",
"Conclusion",
"Acknowledgment"
],
"paragraphs": [
[
"Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in 2019 in Wuhan, Central China, and has since spread globally, resulting in the 2019–2020 coronavirus pandemic. On March 16th, 2020, researchers and leaders from the Allen Institute for AI, Chan Zuckerberg Initiative (CZI), Georgetown University’s Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health released the COVID-19 Open Research Dataset (CORD-19) of scholarly literature about COVID-19, SARS-CoV-2, and the coronavirus group.",
"Named entity recognition (NER) is a fundamental step in text mining system development to facilitate the COVID-19 studies. There is critical need for NER methods that can quickly adapt to all the COVID-19 related new types without much human effort for training data annotation. We created this CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). This dataset covers 75 fine-grained named entity types. CORD-19-NER is automatically generated by combining the annotation results from four sources. In the following sections, we introduce the details of CORD-19-NER dataset construction. We also show some NER annotation results in this dataset."
],
[
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations.",
"Corpus Tokenization. The raw corpus is a combination of the “title\", “abstract\" and “full-text\" from the CORD-19 corpus. We first conduct automatic phrase mining on the raw corpus using AutoPhrase BIBREF0. Then we do the second round of tokenization with Spacy on the phrase-replaced corpus. We have observed that keeping the AutoPhrase results will significantly improve the distantly- and weakly-supervised NER performance.",
"Key Items. The tokenized corpus includes the following items:",
"doc_id: the line number (0-29499) in “all_sources_metadata_2020-03-13.csv\" in the CORD-19 corpus (2020-03-13).",
"sents: [sent_id, sent_tokens], tokenized sentences and words as described above.",
"source: CZI (1236 records), PMC (27337), bioRxiv (566) and medRxiv (361).",
"doi: populated for all BioRxiv/MedRxiv paper records and most of the other records (26357 non null).",
"pmcid: populated for all PMC paper records (27337 non null).",
"pubmed_id: populated for some of the records.",
"Other keys: publish_time, authors and journal.",
"The tokenized corpus (CORD-19-corpus.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset."
],
[
"CORD-19-NER annotation is a combination from four sources with different NER methods:",
"Pre-trained NER on 18 general entity types from Spacy using the model “en_core_web_sm\".",
"Pre-trained NER on 18 biomedical entity types from SciSpacy using the model “en_ner_bionlp13cg_md\".",
"Knowledge base (KB)-guided NER on 127 biomedical entity types with our distantly-supervised NER methods BIBREF1, BIBREF2. We do not require any human annotated training data for the NER model training. Instead, We rely on UMLS as the input KB for distant supervision.",
"Seed-guided NER on 9 new entity types (specifically related to the COVID-19 studies) with our weakly-supervised NER method. We only require several (10-20) human-input seed entities for each new type. Then we expand the seed entity sets with CatE BIBREF3 and apply our distant NER method for the new entity type recognition.",
"The 9 new entity types with examples of their input seed are as follows:",
"Coronavirus: COVID-19, SARS, MERS, etc.",
"Viral Protein: Hemagglutinin, GP120, etc.",
"Livestock: cattle, sheep, pig, etc.",
"Wildlife: bats, wild animals, wild birds, etc",
"Evolution: genetic drift, natural selection, mutation rate, etc",
"Physical Science: atomic charge, Amber force fields, Van der Waals interactions, etc.",
"Substrate: blood, sputum, urine, etc.",
"Material: copper, stainless steel, plastic, etc.",
"Immune Response: adaptive immune response, cell mediated immunity, innate immunity, etc.",
"We merged all the entity types from the four sources and reorganized them into one entity type hierarchy. Specifically, we align all the types from SciSpacy to UMLS. We also merge some fine-grained UMLS entity types to their more coarse-grained types based on the corpus count. Then we get a final entity type hierarchy with 75 fine-grained entity types used in our annotations. The entity type hierarchy (CORD-19-types.xlsx) can be found in our CORD-19-NER dataset.",
"Then we conduct named entity annotation with the four NER methods on the 75 fine-grained entity types. After we get the NER annotation results with the four different methods, we merge the results into one file. The conflicts are resolved by giving priority to different entity types annotated by different methods according to their annotation quality. The final entity annotation results (CORD-19-ner.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset."
],
[
"In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly supervised methods achieve high quality recognizing the new entity types, requiring only several seed examples as the input. For example, we recognized “SARS-CoV-2\" as the “CORONAVIRUS\" type, “bat\" and “pangolins\" as the “WILDLIFE\" type and “Van der Waals forces\" as the “PHYSICAL_SCIENCE\" type. This NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent that can be applied to corpus in different domains. In addition, we show another example of NER annotation on New York Times with our system in Figure FIGREF29.",
"In Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2\" as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities such as “pylogenetic\" as a evolution term and “bat\" as a wildlife. In Figure FIGREF30, we can also see that our method can identify many more entities such as “racism\" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation."
],
[
"In Table TABREF34, we show some examples of the most frequent entities in the annotated corpus. Specifically, we show the entity types including both our new types and some UMLS types that have not been manually annotated before. We find our annotated entities very informative for the COVID-19 studies. For example, the most frequent entities for the type “SIGN_OR_SYMPTOM behavior\" includes “cough\" and “respiratory symptoms\" that are the most common symptoms for COVID-19 . The most frequent entities for the type “INDIVIDUAL_BEHAVIOR\" includes “hand hygiene\", “disclosures\" and “absenteeism\", which indicates that people focus more on hand cleaning for the COVID-19 issue. Also, the most frequent entities for the type “MACHINE_ACTIVITY\" includes “machine learning\", “data processing\" and “automation\", which indicates that people focus more on the automated methods that can process massive data for the COVID-19 studies. This type also includes “telecommunication\" as the top results, which is quite reasonable under the current COVID-19 situation. More examples can be found in our dataset."
],
[
"In the future, we will further improve the CORD-19-NER dataset quality. We will also build text mining systems based on the CORD-19-NER dataset with richer functionalities. We hope this dataset can help the text mining community build downstream applications. We also hope this dataset can bring insights for the COVID-19 studies, both on the biomedical side and on the social side."
],
[
"Research was sponsored in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and SocialSim Program No. W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HDTRA11810026. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies."
]
]
} | {
"question": [
"Did they experiment with the dataset?",
"What is the size of this dataset?",
"Do they list all the named entity types present?"
],
"question_id": [
"ce6201435cc1196ad72b742db92abd709e0f9e8d",
"928828544e38fe26c53d81d1b9c70a9fb1cc3feb",
"4f243056e63a74d1349488983dc1238228ca76a7"
],
"nlp_background": [
"",
"",
""
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly supervised methods achieve high quality recognizing the new entity types, requiring only several seed examples as the input. For example, we recognized “SARS-CoV-2\" as the “CORONAVIRUS\" type, “bat\" and “pangolins\" as the “WILDLIFE\" type and “Van der Waals forces\" as the “PHYSICAL_SCIENCE\" type. This NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent that can be applied to corpus in different domains. In addition, we show another example of NER annotation on New York Times with our system in Figure FIGREF29.",
"In Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2\" as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities such as “pylogenetic\" as a evolution term and “bat\" as a wildlife. In Figure FIGREF30, we can also see that our method can identify many more entities such as “racism\" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation."
],
"highlighted_evidence": [
"In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly supervised methods achieve high quality recognizing the new entity types, requiring only several seed examples as the input. For example, we recognized “SARS-CoV-2\" as the “CORONAVIRUS\" type, “bat\" and “pangolins\" as the “WILDLIFE\" type and “Van der Waals forces\" as the “PHYSICAL_SCIENCE\" type. This NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent that can be applied to corpus in different domains. In addition, we show another example of NER annotation on New York Times with our system in Figure FIGREF29.\n\nIn Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2\" as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities such as “pylogenetic\" as a evolution term and “bat\" as a wildlife. In Figure FIGREF30, we can also see that our method can identify many more entities such as “racism\" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation."
]
}
],
"annotation_id": [
"2d5e1221e7cd30341b51ddb988b8659b48b7ac2b"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"29,500 documents"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Named entity recognition (NER) is a fundamental step in text mining system development to facilitate the COVID-19 studies. There is critical need for NER methods that can quickly adapt to all the COVID-19 related new types without much human effort for training data annotation. We created this CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). This dataset covers 75 fine-grained named entity types. CORD-19-NER is automatically generated by combining the annotation results from four sources. In the following sections, we introduce the details of CORD-19-NER dataset construction. We also show some NER annotation results in this dataset.",
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations."
],
"highlighted_evidence": [
"We created this CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). ",
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). "
]
},
{
"unanswerable": false,
"extractive_spans": [
"29,500 documents in the CORD-19 corpus (2020-03-13)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations."
],
"highlighted_evidence": [
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations.\n\n"
]
}
],
"annotation_id": [
"1466a1bd3601c1b1cdedab1edb1bca2334809e3d",
"cd982553050caaa6fd8dabefe8b9697b05f5cf94"
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Table 2: Examples of the most frequent entities annotated in CORD-NER."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Examples of the most frequent entities annotated in CORD-NER."
]
}
],
"annotation_id": [
"bd64f676b7b1d47ad86c5c897acfe759c2259269"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [
"Table 1: Performance comparison on three major biomedical entity types in COVID-19 corpus.",
"Figure 1: Examples of the annotation results with CORD-NER system.",
"Figure 2: Annotation result comparison with other NER methods.",
"Table 2: Examples of the most frequent entities annotated in CORD-NER."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png"
]
} |
1904.09678 | UniSent: Universal Adaptable Sentiment Lexica for 1000+ Languages | In this paper, we introduce UniSent, universal sentiment lexica for 1000 languages created using an English sentiment lexicon and a massively parallel corpus in the Bible domain. To the best of our knowledge, UniSent is the largest sentiment resource to date in terms of the number of covered languages, including many low resource languages. To create UniSent, we propose Adapted Sentiment Pivot, a novel method that combines annotation projection, vocabulary expansion, and unsupervised domain adaptation. We evaluate the quality of UniSent for Macedonian, Czech, German, Spanish, and French and show that its quality is comparable to manually or semi-manually created sentiment resources. With the publication of this paper, we release the UniSent lexica as well as the code related to the Adapted Sentiment Pivot method. | {
"section_name": [
"Introduction",
"Method",
"Experimental Setup",
"Results",
"Conclusion"
],
"paragraphs": [
[
"Sentiment classification is an important task which requires either word level or document level sentiment annotations. Such resources are available for at most 136 languages BIBREF0 , preventing accurate sentiment classification in a low resource setup. Recent research efforts on cross-lingual transfer learning enable to train models in high resource languages and transfer this information into other, low resource languages using minimal bilingual supervision BIBREF1 , BIBREF2 , BIBREF3 . Besides that, little effort has been spent on the creation of sentiment lexica for low resource languages (e.g., BIBREF0 , BIBREF4 , BIBREF5 ). We create and release Unisent, the first massively cross-lingual sentiment lexicon in more than 1000 languages. An extensive evaluation across several languages shows that the quality of Unisent is close to manually created resources. Our method is inspired by BIBREF6 with a novel combination of vocabulary expansion and domain adaptation using embedding spaces. Similar to our work, BIBREF7 also use massively parallel corpora to project POS tags and dependency relations across languages. However, their approach is based on assignment of the most probable label according to the alignment model from the source to the target language and does not include any vocabulary expansion or domain adaptation and do not use the embedding graphs."
],
[
"Our method, Adapted Sentiment Pivot requires a sentiment lexicon in one language (e.g. English) as well as a massively parallel corpus. Following steps are performed on this input."
],
[
"Our goal is to evaluate the quality of UniSent against several manually created sentiment lexica in different domains to ensure its quality for the low resource languages. We do this in several steps.",
"As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . These lexica contain general domain words (as opposed to Twitter or Bible). As gold standard for twitter domain we use emoticon dataset and perform emoticon sentiment prediction BIBREF16 , BIBREF17 .",
"We use the (manually created) English sentiment lexicon (WKWSCI) in BIBREF18 as a resource to be projected over languages. For the projection step (Section SECREF1 ) we use the massively parallel Bible corpus in BIBREF8 . We then propagate the projected sentiment polarities to all words in the Wikipedia corpus. We chose Wikipedia here because its domain is closest to the manually annotated sentiment lexica we use to evaluate UniSent. In the adaptation step, we compute the shift between the vocabularies in the Bible and Wikipedia corpora. To show that our adaptation method also works well on domains like Twitter, we propose a second evaluation in which we use Adapted Sentiment Pivot to predict the sentiment of emoticons in Twitter.",
"To create our test sets, we first split UniSent and our gold standard lexica as illustrated in Figure FIGREF11 . We then form our training and test sets as follows:",
"(i) UniSent-Lexicon: we use words in UniSent for the sentiment learning in the target domain; for this purpose, we use words INLINEFORM0 .",
"(ii) Baseline-Lexicon: we use words in the gold standard lexicon for the sentiment learning in the target domain; for this purpose we use words INLINEFORM0 .",
"(iii) Evaluation-Lexicon: we randomly exclude a set of words the baseline-lexicon INLINEFORM0 . In selection of the sampling size we make sure that INLINEFORM1 and INLINEFORM2 would contain a comparable number of words.",
""
],
[
"In Table TABREF13 we compare the quality of UniSent with the Baseline-Lexicon as well as with the gold standard lexicon for general domain data. The results show that (i) UniSent clearly outperforms the baseline for all languages (ii) the quality of UniSent is close to manually annotated data (iii) the domain adaptation method brings small improvements for morphologically poor languages. The modest gains could be because our drift weighting method (Section SECREF3 ) mainly models a sense shift between words which is not always equivalent to a polarity shift.",
"In Table TABREF14 we compare the quality of UniSent with the gold standard emoticon lexicon in the Twitter domain. The results show that (i) UniSent clearly outperforms the baseline and (ii) our domain adaptation technique brings small improvements for French and Spanish."
],
[
"Using our novel Adapted Sentiment Pivot method, we created UniSent, a sentiment lexicon covering over 1000 (including many low-resource) languages in several domains. The only necessary resources to create UniSent are a sentiment lexicon in any language and a massively parallel corpus that can be small and domain specific. Our evaluation showed that the quality of UniSent is closed to manually annotated resources.",
" "
]
]
} | {
"question": [
"how is quality measured?",
"how many languages exactly is the sentiment lexica for?",
"what sentiment sources do they compare with?"
],
"question_id": [
"8f87215f4709ee1eb9ddcc7900c6c054c970160b",
"b04098f7507efdffcbabd600391ef32318da28b3",
"8fc14714eb83817341ada708b9a0b6b4c6ab5023"
],
"nlp_background": [
"",
"",
""
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Accuracy and the macro-F1 (averaged F1 over positive and negative classes) are used as a measure of quality.",
"evidence": [
"FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting."
]
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"97009bed24107de806232d7cf069f51053d7ba5e",
"e38ed05ec140abd97006a8fa7af9a7b4930247df"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"d1204f71bd3c78a11b133016f54de78e8eaecf6e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . These lexica contain general domain words (as opposed to Twitter or Bible). As gold standard for twitter domain we use emoticon dataset and perform emoticon sentiment prediction BIBREF16 , BIBREF17 ."
],
"highlighted_evidence": [
"As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 ."
]
}
],
"annotation_id": [
"17db53c0c6f13fe1d43eee276a9554677f007eef"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Neighbors of word ’sensual’ in Spanish, in bible embedding graph (a) and twitter embedding graph (b). Our unsupervised drift weighting method found this word in Spanish to be the most changing word from bible context to the twitter context. Looking more closely at the neighbors, the word sensual in the biblical context has been associated with a negative sentiment of sins. However, in the twitter domain, it has a positive sentiment. This example shows how our unsupervised method can improve the quality of sentiment lexicon.",
"Figure 2: Data split used in the experimental setup of UniSent evaluation: Set (C) is the intersection of the target embedding space words (Wikipedia/Emoticon) and the UniSent lexicon as well as the manually created lexicon. Set (A) is the intersection of the target embedding space words and the UniSent lexicon, excluding set (C). Set (B) is the intersection of the target embedding space words and the manually created lexicon, excluding set (C).",
"Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting.",
"Table 2: Comparison of domain adapted and vanilla UniSent for Emoticon sentiment prediction using monlingual twitter embeddings in German, Italian, French, and Spanish."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} |
2003.06651 | Word Sense Disambiguation for 158 Languages using Word Embeddings Only | Disambiguation of word senses in context is easy for humans, but is a major challenge for automatic approaches. Sophisticated supervised and knowledge-based models were developed to solve this task. However, (i) the inherent Zipfian distribution of supervised training instances for a given word and/or (ii) the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledge-free approaches to word sense disambiguation (WSD). They are particularly useful for under-resourced languages which do not have any resources for building either supervised and/or knowledge-based models. In this paper, we present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory, which can be used for disambiguation in context. We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave et al. (2018), enabling WSD in these languages. Models and system are available online. | {
"section_name": [
"",
" ::: ",
" ::: ::: ",
"Introduction",
"Related Work",
"Algorithm for Word Sense Induction",
"Algorithm for Word Sense Induction ::: SenseGram: A Baseline Graph-based Word Sense Induction Algorithm",
"Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Induction of Sense Inventories",
"Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Labelling of Induced Senses",
"Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Word Sense Disambiguation",
"System Design",
"System Design ::: Construction of Sense Inventories",
"System Design ::: Word Sense Disambiguation System",
"Evaluation",
"Evaluation ::: Lexical Similarity and Relatedness ::: Experimental Setup",
"Evaluation ::: Lexical Similarity and Relatedness ::: Discussion of Results",
"Evaluation ::: Word Sense Disambiguation",
"Evaluation ::: Word Sense Disambiguation ::: Experimental Setup",
"Evaluation ::: Word Sense Disambiguation ::: Discussion of Results",
"Evaluation ::: Analysis",
"Conclusions and Future Work",
"Acknowledgements"
],
"paragraphs": [
[
"1.1em"
],
[
"1.1.1em"
],
[
"1.1.1.1em",
"ru=russian",
"",
"$^1$Skolkovo Institute of Science and Technology, Moscow, Russia",
"[email protected]",
"$^2$Ural Federal University, Yekaterinburg, Russia",
"$^3$Universität Hamburg, Hamburg, Germany",
"$^4$Universität Mannheim, Mannheim, Germany",
"$^5$University of Oslo, Oslo, Norway",
"$^6$Higher School of Economics, Moscow, Russia",
"Disambiguation of word senses in context is easy for humans, but is a major challenge for automatic approaches. Sophisticated supervised and knowledge-based models were developed to solve this task. However, (i) the inherent Zipfian distribution of supervised training instances for a given word and/or (ii) the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledge-free approaches to word sense disambiguation (WSD). They are particularly useful for under-resourced languages which do not have any resources for building either supervised and/or knowledge-based models. In this paper, we present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory, which can be used for disambiguation in context. We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave:18, enabling WSD in these languages. Models and system are available online.",
"word sense induction, word sense disambiguation, word embeddings, sense embeddings, graph clustering"
],
[
"There are many polysemous words in virtually any language. If not treated as such, they can hamper the performance of all semantic NLP tasks BIBREF0. Therefore, the task of resolving the polysemy and choosing the most appropriate meaning of a word in context has been an important NLP task for a long time. It is usually referred to as Word Sense Disambiguation (WSD) and aims at assigning meaning to a word in context.",
"The majority of approaches to WSD are based on the use of knowledge bases, taxonomies, and other external manually built resources BIBREF1, BIBREF2. However, different senses of a polysemous word occur in very diverse contexts and can potentially be discriminated with their help. The fact that semantically related words occur in similar contexts, and diverse words do not share common contexts, is known as distributional hypothesis and underlies the technique of constructing word embeddings from unlabelled texts. The same intuition can be used to discriminate between different senses of individual words. There exist methods of training word embeddings that can detect polysemous words and assign them different vectors depending on their contexts BIBREF3, BIBREF4. Unfortunately, many wide-spread word embedding models, such as GloVe BIBREF5, word2vec BIBREF6, fastText BIBREF7, do not handle polysemous words. Words in these models are represented with single vectors, which were constructed from diverse sets of contexts corresponding to different senses. In such cases, their disambiguation needs knowledge-rich approaches.",
"We tackle this problem by suggesting a method of post-hoc unsupervised WSD. It does not require any external knowledge and can separate different senses of a polysemous word using only the information encoded in pre-trained word embeddings. We construct a semantic similarity graph for words and partition it into densely connected subgraphs. This partition allows for separating different senses of polysemous words. Thus, the only language resource we need is a large unlabelled text corpus used to train embeddings. This makes our method applicable to under-resourced languages. Moreover, while other methods of unsupervised WSD need to train embeddings from scratch, we perform retrofitting of sense vectors based on existing word embeddings.",
"We create a massively multilingual application for on-the-fly word sense disambiguation. When receiving a text, the system identifies its language and performs disambiguation of all the polysemous words in it based on pre-extracted word sense inventories. The system works for 158 languages, for which pre-trained fastText embeddings available BIBREF8. The created inventories are based on these embeddings. To the best of our knowledge, our system is the only WSD system for the majority of the presented languages. Although it does not match the state of the art for resource-rich languages, it is fully unsupervised and can be used for virtually any language.",
"The contributions of our work are the following:",
"[noitemsep]",
"We release word sense inventories associated with fastText embeddings for 158 languages.",
"We release a system that allows on-the-fly word sense disambiguation for 158 languages.",
"We present egvi (Ego-Graph Vector Induction), a new algorithm of unsupervised word sense induction, which creates sense inventories based on pre-trained word vectors."
],
[
"There are two main scenarios for WSD: the supervised approach that leverages training corpora explicitly labelled for word sense, and the knowledge-based approach that derives sense representation from lexical resources, such as WordNet BIBREF9. In the supervised case WSD can be treated as a classification problem. Knowledge-based approaches construct sense embeddings, i.e. embeddings that separate various word senses.",
"SupWSD BIBREF10 is a state-of-the-art system for supervised WSD. It makes use of linear classifiers and a number of features such as POS tags, surrounding words, local collocations, word embeddings, and syntactic relations. GlossBERT model BIBREF11, which is another implementation of supervised WSD, achieves a significant improvement by leveraging gloss information. This model benefits from sentence-pair classification approach, introduced by Devlin:19 in their BERT contextualized embedding model. The input to the model consists of a context (a sentence which contains an ambiguous word) and a gloss (sense definition) from WordNet. The context-gloss pair is concatenated through a special token ([SEP]) and classified as positive or negative.",
"On the other hand, sense embeddings are an alternative to traditional word vector models such as word2vec, fastText or GloVe, which represent monosemous words well but fail for ambiguous words. Sense embeddings represent individual senses of polysemous words as separate vectors. They can be linked to an explicit inventory BIBREF12 or induce a sense inventory from unlabelled data BIBREF13. LSTMEmbed BIBREF13 aims at learning sense embeddings linked to BabelNet BIBREF14, at the same time handling word ordering, and using pre-trained embeddings as an objective. Although it was tested only on English, the approach can be easily adapted to other languages present in BabelNet. However, manually labelled datasets as well as knowledge bases exist only for a small number of well-resourced languages. Thus, to disambiguate polysemous words in other languages one has to resort to fully unsupervised techniques.",
"The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word. WSI approaches fall into three main groups: context clustering, word ego-network clustering and synonyms (or substitute) clustering.",
"Context clustering approaches consist in creating vectors which characterise words' contexts and clustering these vectors. Here, the definition of context may vary from window-based context to latent topic-alike context. Afterwards, the resulting clusters are either used as senses directly BIBREF15, or employed further to learn sense embeddings via Chinese Restaurant Process algorithm BIBREF16, AdaGram, a Bayesian extension of the Skip-Gram model BIBREF17, AutoSense, an extension of the LDA topic model BIBREF18, and other techniques.",
"Word ego-network clustering is applied to semantic graphs. The nodes of a semantic graph are words, and edges between them denote semantic relatedness which is usually evaluated with cosine similarity of the corresponding embeddings BIBREF19 or by PMI-like measures BIBREF20. Word senses are induced via graph clustering algorithms, such as Chinese Whispers BIBREF21 or MaxMax BIBREF22. The technique suggested in our work belongs to this class of methods and is an extension of the method presented by Pelevina:16.",
"Synonyms and substitute clustering approaches create vectors which represent synonyms or substitutes of polysemous words. Such vectors are created using synonymy dictionaries BIBREF23 or context-dependent substitutes obtained from a language model BIBREF24. Analogously to previously described techniques, word senses are induced by clustering these vectors."
],
[
"The majority of word vector models do not discriminate between multiple senses of individual words. However, a polysemous word can be identified via manual analysis of its nearest neighbours—they reflect different senses of the word. Table TABREF7 shows manually sense-labelled most similar terms to the word Ruby according to the pre-trained fastText model BIBREF8. As it was suggested early by Widdows:02, the distributional properties of a word can be used to construct a graph of words that are semantically related to it, and if a word is polysemous, such graph can easily be partitioned into a number of densely connected subgraphs corresponding to different senses of this word. Our algorithm is based on the same principle."
],
[
"SenseGram is the method proposed by Pelevina:16 that separates nearest neighbours to induce word senses and constructs sense embeddings for each sense. It starts by constructing an ego-graph (semantic graph centred at a particular word) of the word and its nearest neighbours. The edges between the words denote their semantic relatedness, e.g. the two nodes are joined with an edge if cosine similarity of the corresponding embeddings is higher than a pre-defined threshold. The resulting graph can be clustered into subgraphs which correspond to senses of the word.",
"The sense vectors are then constructed by averaging embeddings of words in each resulting cluster. In order to use these sense vectors for word sense disambiguation in text, the authors compute the probabilities of sense vectors of a word given its context or the similarity of the sense vectors to the context."
],
[
"One of the downsides of the described above algorithm is noise in the generated graph, namely, unrelated words and wrong connections. They hamper the separation of the graph. Another weak point is the imbalance in the nearest neighbour list, when a large part of it is attributed to the most frequent sense, not sufficiently representing the other senses. This can lead to construction of incorrect sense vectors.",
"We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:",
"Extract a list $\\mathcal {N}$ = {$w_{1}$, $w_{2}$, ..., $w_{N}$} of $N$ nearest neighbours for the target (ego) word vector $w$.",
"Compute a list $\\Delta $ = {$\\delta _{1}$, $\\delta _{2}$, ..., $\\delta _{N}$} for each $w_{i}$ in $\\mathcal {N}$, where $\\delta _{i}~=~w-w_{i}$. The vectors in $\\delta $ contain the components of sense of $w$ which are not related to the corresponding nearest neighbours from $\\mathcal {N}$.",
"Compute a list $\\overline{\\mathcal {N}}$ = {$\\overline{w_{1}}$, $\\overline{w_{2}}$, ..., $\\overline{w_{N}}$}, such that $\\overline{w_{i}}$ is in the top nearest neighbours of $\\delta _{i}$ in the embedding space. In other words, $\\overline{w_{i}}$ is a word which is the most similar to the target (ego) word $w$ and least similar to its neighbour $w_{i}$. We refer to $\\overline{w_{i}}$ as an anti-pair of $w_{i}$. The set of $N$ nearest neighbours and their anti-pairs form a set of anti-edges i.e. pairs of most dissimilar nodes – those which should not be connected: $\\overline{E} = \\lbrace (w_{1},\\overline{w_{1}}), (w_{2},\\overline{w_{2}}), ..., (w_{N},\\overline{w_{N}})\\rbrace $.",
"To clarify this, consider the target (ego) word $w = \\textit {python}$, its top similar term $w_1 = \\textit {Java}$ and the resulting anti-pair $\\overline{w_i} = \\textit {snake}$ which is the top related term of $\\delta _1 = w - w_1$. Together they form an anti-edge $(w_i,\\overline{w_i})=(\\textit {Java}, \\textit {snake})$ composed of a pair of semantically dissimilar terms.",
"Construct $V$, the set of vertices of our semantic graph $G=(V,E)$ from the list of anti-edges $\\overline{E}$, with the following recurrent procedure: $V = V \\cup \\lbrace w_{i}, \\overline{w_{i}}: w_{i} \\in \\mathcal {N}, \\overline{w_{i}} \\in \\mathcal {N}\\rbrace $, i.e. we add a word from the list of nearest neighbours and its anti-pair only if both of them are nearest neighbours of the original word $w$. We do not add $w$'s nearest neighbours if their anti-pairs do not belong to $\\mathcal {N}$. Thus, we add only words which can help discriminating between different senses of $w$.",
"Construct the set of edges $E$ as follows. For each $w_{i}~\\in ~\\mathcal {N}$ we extract a set of its $K$ nearest neighbours $\\mathcal {N}^{\\prime }_{i} = \\lbrace u_{1}, u_{2}, ..., u_{K}\\rbrace $ and define $E = \\lbrace (w_{i}, u_{j}): w_{i}~\\in ~V, u_j~\\in ~V, u_{j}~\\in ~\\mathcal {N}^{\\prime }_{i}, u_{j}~\\ne ~\\overline{w_{i}}\\rbrace $. In other words, we remove edges between a word $w_{i}$ and its nearest neighbour $u_j$ if $u_j$ is also its anti-pair. According to our hypothesis, $w_{i}$ and $\\overline{w_{i}}$ belong to different senses of $w$, so they should not be connected (i.e. we never add anti-edges into $E$). Therefore, we consider any connection between them as noise and remove it.",
"Note that $N$ (the number of nearest neighbours for the target word $w$) and $K$ (the number of nearest neighbours of $w_{ci}$) do not have to match. The difference between these parameters is the following. $N$ defines how many words will be considered for the construction of ego-graph. On the other hand, $K$ defines the degree of relatedness between words in the ego-graph — if $K = 50$, then we will connect vertices $w$ and $u$ with an edge only if $u$ is in the list of 50 nearest neighbours of $w$. Increasing $K$ increases the graph connectivity and leads to lower granularity of senses.",
"According to our hypothesis, nearest neighbours of $w$ are grouped into clusters in the vector space, and each of the clusters corresponds to a sense of $w$. The described vertices selection procedure allows picking the most representative members of these clusters which are better at discriminating between the clusters. In addition to that, it helps dealing with the cases when one of the clusters is over-represented in the nearest neighbour list. In this case, many elements of such a cluster are not added to $V$ because their anti-pairs fall outside the nearest neighbour list. This also improves the quality of clustering.",
"After the graph construction, the clustering is performed using the Chinese Whispers algorithm BIBREF21. This is a bottom-up clustering procedure that does not require to pre-define the number of clusters, so it can correctly process polysemous words with varying numbers of senses as well as unambiguous words.",
"Figure FIGREF17 shows an example of the resulting pruned graph of for the word Ruby for $N = 50$ nearest neighbours in terms of the fastText cosine similarity. In contrast to the baseline method by BIBREF19 where all 50 terms are clustered, in the method presented in this section we sparsify the graph by removing 13 nodes which were not in the set of the “anti-edges” i.e. pairs of most dissimilar terms out of these 50 neighbours. Examples of anti-edges i.e. pairs of most dissimilar terms for this graph include: (Haskell, Sapphire), (Garnet, Rails), (Opal, Rubyist), (Hazel, RubyOnRails), and (Coffeescript, Opal)."
],
[
"We label each word cluster representing a sense to make them and the WSD results interpretable by humans. Prior systems used hypernyms to label the clusters BIBREF25, BIBREF26, e.g. “animal” in the “python (animal)”. However, neither hypernyms nor rules for their automatic extraction are available for all 158 languages. Therefore, we use a simpler method to select a keyword which would help to interpret each cluster. For each graph node $v \\in V$ we count the number of anti-edges it belongs to: $count(v) = | \\lbrace (w_i,\\overline{w_i}) : (w_i,\\overline{w_i}) \\in \\overline{E} \\wedge (v = w_i \\vee v = \\overline{w_i}) \\rbrace |$. A graph clustering yields a partition of $V$ into $n$ clusters: $V~=~\\lbrace V_1, V_2, ..., V_n\\rbrace $. For each cluster $V_i$ we define a keyword $w^{key}_i$ as the word with the largest number of anti-edges $count(\\cdot )$ among words in this cluster."
],
[
"We use keywords defined above to obtain vector representations of senses. In particular, we simply use word embedding of the keyword $w^{key}_i$ as a sense representation $\\mathbf {s}_i$ of the target word $w$ to avoid explicit computation of sense embeddings like in BIBREF19. Given a sentence $\\lbrace w_1, w_2, ..., w_{j}, w, w_{j+1}, ..., w_n\\rbrace $ represented as a matrix of word vectors, we define the context of the target word $w$ as $\\textbf {c}_w = \\dfrac{\\sum _{j=1}^{n} w_j}{n}$. Then, we define the most appropriate sense $\\hat{s}$ as the sense with the highest cosine similarity to the embedding of the word's context:"
],
[
"We release a system for on-the-fly WSD for 158 languages. Given textual input, it identifies polysemous words and retrieves senses that are the most appropriate in the context."
],
[
"To build word sense inventories (sense vectors) for 158 languages, we utilised GPU-accelerated routines for search of similar vectors implemented in Faiss library BIBREF27. The search of nearest neighbours takes substantial time, therefore, acceleration with GPUs helps to significantly reduce the word sense construction time. To further speed up the process, we keep all intermediate results in memory, which results in substantial RAM consumption of up to 200 Gb.",
"The construction of word senses for all of the 158 languages takes a lot of computational resources and imposes high requirements to the hardware. For calculations, we use in parallel 10–20 nodes of the Zhores cluster BIBREF28 empowered with Nvidia Tesla V100 graphic cards. For each of the languages, we construct inventories based on 50, 100, and 200 neighbours for 100,000 most frequent words. The vocabulary was limited in order to make the computation time feasible. The construction of inventories for one language takes up to 10 hours, with $6.5$ hours on average. Building the inventories for all languages took more than 1,000 hours of GPU-accelerated computations. We release the constructed sense inventories for all the available languages. They contain all the necessary information for using them in the proposed WSD system or in other downstream tasks."
],
[
"The first text pre-processing step is language identification, for which we use the fastText language identification models by Bojanowski:17. Then the input is tokenised. For languages which use Latin, Cyrillic, Hebrew, or Greek scripts, we employ the Europarl tokeniser. For Chinese, we use the Stanford Word Segmenter BIBREF29. For Japanese, we use Mecab BIBREF30. We tokenise Vietnamese with UETsegmenter BIBREF31. All other languages are processed with the ICU tokeniser, as implemented in the PyICU project. After the tokenisation, the system analyses all the input words with pre-extracted sense inventories and defines the most appropriate sense for polysemous words.",
"Figure FIGREF19 shows the interface of the system. It has a textual input form. The automatically identified language of text is shown above. A click on any of the words displays a prompt (shown in black) with the most appropriate sense of a word in the specified context and the confidence score. In the given example, the word Jaguar is correctly identified as a car brand. This system is based on the system by Ustalov:18, extending it with a back-end for multiple languages, language detection, and sense browsing capabilities."
],
[
"We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task."
],
[
"We use the SemR-11 datasets BIBREF32, which contain word pairs with manually assigned similarity scores from 0 (words are not related) to 10 (words are fully interchangeable) for 12 languages: English (en), Arabic (ar), German (de), Spanish (es), Farsi (fa), French (fr), Italian (it), Dutch (nl), Portuguese (pt), Russian (ru), Swedish (sv), Chinese (zh). The task is to assign relatedness scores to these pairs so that the ranking of the pairs by this score is close to the ranking defined by the oracle score. The performance is measured with Pearson correlation of the rankings. Since one word can have several different senses in our setup, we follow Remus:18 and define the relatedness score for a pair of words as the maximum cosine similarity between any of their sense vectors.",
"We extract the sense inventories from fastText embedding vectors. We set $N=K$ for all our experiments, i.e. the number of vertices in the graph and the maximum number of vertices' nearest neighbours match. We conduct experiments with $N=K$ set to 50, 100, and 200. For each cluster $V_i$ we create a sense vector $s_i$ by averaging vectors that belong to this cluster. We rely on the methodology of BIBREF33 shifting the generated sense vector to the direction of the original word vector: $s_i~=~\\lambda ~w + (1-\\lambda )~\\dfrac{1}{n}~\\sum _{u~\\in ~V_i} cos(w, u)\\cdot u, $ where, $\\lambda \\in [0, 1]$, $w$ is the embedding of the original word, $cos(w, u)$ is the cosine similarity between $w$ and $u$, and $n=|V_i|$. By introducing the linear combination of $w$ and $u~\\in ~V_i$ we enforce the similarity of sense vectors to the original word important for this task. In addition to that, we weight $u$ by their similarity to the original word, so that more similar neighbours contribute more to the sense vector. The shifting parameter $\\lambda $ is set to $0.5$, following Remus:18.",
"A fastText model is able to generate a vector for each word even if it is not represented in the vocabulary, due to the use of subword information. However, our system cannot assemble sense vectors for out-of-vocabulary words, for such words it returns their original fastText vector. Still, the coverage of the benchmark datasets by our vocabulary is at least 85% and approaches 100% for some languages, so we do not have to resort to this back-off strategy very often.",
"We use the original fastText vectors as a baseline. In this case, we compute the relatedness scores of the two words as a cosine similarity of their vectors."
],
[
"We compute the relatedness scores for all benchmark datasets using our sense vectors and compare them to cosine similarity scores of original fastText vectors. The results vary for different languages. Figure FIGREF28 shows the change in Pearson correlation score when switching from the baseline fastText embeddings to our sense vectors. The new vectors significantly improve the relatedness detection for German, Farsi, Russian, and Chinese, whereas for Italian, Dutch, and Swedish the score slightly falls behind the baseline. For other languages, the performance of sense vectors is on par with regular fastText."
],
[
"The purpose of our sense vectors is disambiguation of polysemous words. Therefore, we test the inventories constructed with egvi on the Task 13 of SemEval-2013 — Word Sense Induction BIBREF34. The task is to identify the different senses of a target word in context in a fully unsupervised manner."
],
[
"The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.",
"The task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9.",
"The performance of WSI models is measured with three metrics that require mapping of sense inventories (Jaccard Index, Kendall's $\\tau $, and WNDCG) and two cluster comparison metrics (Fuzzy NMI and Fuzzy B-Cubed)."
],
[
"We compare our model with the models that participated in the task, the baseline ego-graph clustering model by Pelevina:16, and AdaGram BIBREF17, a method that learns sense embeddings based on a Bayesian extension of the Skip-gram model. Besides that, we provide the scores of the simple baselines originally used in the task: assigning one sense to all words, assigning the most frequent sense to all words, and considering each context as expressing a different sense. The evaluation of our model was performed using the open source context-eval tool.",
"Table TABREF31 shows the performance of these models on the SemEval dataset. Due to space constraints, we only report the scores of the best-performing SemEval participants, please refer to jurgens-klapaftis-2013-semeval for the full results. The performance of AdaGram and SenseGram models is reported according to Pelevina:16.",
"The table shows that the performance of egvi is similar to state-of-the-art word sense disambiguation and word sense induction models. In particular, we can see that it outperforms SenseGram on the majority of metrics. We should note that this comparison is not fully rigorous, because SenseGram induces sense inventories from word2vec as opposed to fastText vectors used in our work."
],
[
"In order to see how the separation of word contexts that we perform corresponds to actual senses of polysemous words, we visualise ego-graphs produced by our method. Figure FIGREF17 shows the nearest neighbours clustering for the word Ruby, which divides the graph into five senses: Ruby-related programming tools, e.g. RubyOnRails (orange cluster), female names, e.g. Josie (magenta cluster), gems, e.g. Sapphire (yellow cluster), programming languages in general, e.g. Haskell (red cluster). Besides, this is typical for fastText embeddings featuring sub-string similarity, one can observe a cluster of different spelling of the word Ruby in green.",
"Analogously, the word python (see Figure FIGREF35) is divided into the senses of animals, e.g. crocodile (yellow cluster), programming languages, e.g. perl5 (magenta cluster), and conference, e.g. pycon (red cluster).",
"In addition, we show a qualitative analysis of senses of mouse and apple. Table TABREF38 shows nearest neighbours of the original words separated into clusters (labels for clusters were assigned manually). These inventories demonstrate clear separation of different senses, although it can be too fine-grained. For example, the first and the second cluster for mouse both refer to computer mouse, but the first one addresses the different types of computer mice, and the second one is used in the context of mouse actions. Similarly, we see that iphone and macbook are separated into two clusters. Interestingly, fastText handles typos, code-switching, and emojis by correctly associating all non-standard variants to the word they refer, and our method is able to cluster them appropriately. Both inventories were produced with $K=200$, which ensures stronger connectivity of graph. However, we see that this setting still produces too many clusters. We computed the average numbers of clusters produced by our model with $K=200$ for words from the word relatedness datasets and compared these numbers with the number of senses in WordNet for English and RuWordNet BIBREF35 for Russian (see Table TABREF37). We can see that the number of senses extracted by our method is consistently higher than the real number of senses.",
"We also compute the average number of senses per word for all the languages and different values of $K$ (see Figure FIGREF36). The average across languages does not change much as we increase $K$. However, for larger $K$ the average exceed the median value, indicating that more languages have lower number of senses per word. At the same time, while at smaller $K$ the maximum average number of senses per word does not exceed 6, larger values of $K$ produce outliers, e.g. English with $12.5$ senses.",
"Notably, there are no languages with an average number of senses less than 2, while numbers on English and Russian WordNets are considerably lower. This confirms that our method systematically over-generates senses. The presence of outliers shows that this effect cannot be eliminated by further increasing $K$, because the $i$-th nearest neighbour of a word for $i>200$ can be only remotely related to this word, even if the word is rare. Thus, our sense clustering algorithm needs a method of merging spurious senses."
],
[
"We present egvi, a new algorithm for word sense induction based on graph clustering that is fully unsupervised and relies on graph operations between word vectors. We apply this algorithm to a large collection of pre-trained fastText word embeddings, releasing sense inventories for 158 languages. These inventories contain all the necessary information for constructing sense vectors and using them in downstream tasks. The sense vectors for polysemous words can be directly retrofitted with the pre-trained word embeddings and do not need any external resources. As one application of these multilingual sense inventories, we present a multilingual word sense disambiguation system that performs unsupervised and knowledge-free WSD for 158 languages without the use of any dictionary or sense-labelled corpus.",
"The evaluation of quality of the produced sense inventories is performed on multilingual word similarity benchmarks, showing that our sense vectors improve the scores compared to non-disambiguated word embeddings. Therefore, our system in its present state can improve WSD and downstream tasks for languages where knowledge bases, taxonomies, and annotated corpora are not available and supervised WSD models cannot be trained.",
"A promising direction for future work is combining distributional information from the induced sense inventories with lexical knowledge bases to improve WSD performance. Besides, we encourage the use of the produced word sense inventories in other downstream tasks."
],
[
"We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) foundation under the “JOIN-T 2” and “ACQuA” projects. Ekaterina Artemova was supported by the framework of the HSE University Basic Research Program and Russian Academic Excellence Project “5-100”."
]
]
} | {
"question": [
"Is the method described in this work a clustering-based method?",
"How are the different senses annotated/labeled? ",
"Was any extrinsic evaluation carried out?"
],
"question_id": [
"d94ac550dfdb9e4bbe04392156065c072b9d75e1",
"eeb6e0caa4cf5fdd887e1930e22c816b99306473",
"3c0eaa2e24c1442d988814318de5f25729696ef5"
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"f7c76ad7ff9c8b54e8c397850358fa59258c6672",
"f7c76ad7ff9c8b54e8c397850358fa59258c6672",
"f7c76ad7ff9c8b54e8c397850358fa59258c6672"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word. WSI approaches fall into three main groups: context clustering, word ego-network clustering and synonyms (or substitute) clustering.",
"We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:"
],
"highlighted_evidence": [
"The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word.",
"We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:"
],
"highlighted_evidence": [
"We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space."
]
}
],
"annotation_id": [
"1e94724114314e98f1b554b9e902e5b72f23a5f1",
"77e36cc311b73a573d3fd8b9f3283f9483c9d2b4"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"The contexts are manually labelled with WordNet senses of the target words"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.",
"The task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9."
],
"highlighted_evidence": [
"The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.\n\nThe task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9."
]
}
],
"annotation_id": [
"f54fb34f7c718c0916d43f15c8e0d20a590d0bad"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task."
],
"highlighted_evidence": [
"We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task."
]
}
],
"annotation_id": [
"1876981b02466f860fe36cc2c964b52887b55a4c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Top nearest neighbours of the fastText vector of the word Ruby are clustered according to various senses of this word: programming language, gem, first name, color, but also its spelling variations (typeset in black color).",
"Figure 1: The graph of nearest neighbours of the word Ruby can be separated according several senses: programming languages, female names, gems, as well as a cluster of different spellings of the word Ruby.",
"Figure 2: Interface of our WSD module with examples for the English language. Given a sentence, it identifies polysemous words and retrieves the most appropriate sense (labelled by the centroid word of a corresponding cluster).",
"Figure 3: Absolute improvement of Pearson correlation scores of our embeddings compared to fastText. This is the averaged difference of the scores for all word similarity benchmarks.",
"Figure 4: Ego-graph for a polysemous word python which is clustered into senses snake (yellow), programming language (magenta), and conference (red). Node size denotes word importance with the largest node in the cluster being used as a keyword to interpret an induced word sense.",
"Table 2: WSD performance on the SemEval-2013 Task 13 dataset for the English language.",
"Figure 5: Distribution of the number of senses per word in the generated inventories for all 158 languages for the number of neighbours set to: N ∈ {50, 100, 200}, K ∈ {50, 100, 200} with N = K.",
"Table 3: Average number of senses for words from SemR11 dataset in our inventory and in WordNet for English and ruWordNet for Russian. The rightmost column gives the average number of senses in the inventories and WordNets.",
"Table 4: Clustering of senses for words mouse and apple produced by our method. Cluster labels in this table were assigned manually for illustrative purposes. For on-the-fly disambiguation we use centroid words in clusters as sense labels (shown here in bold)."
],
"file": [
"2-Table1-1.png",
"4-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png",
"7-Table2-1.png",
"7-Figure5-1.png",
"7-Table3-1.png",
"8-Table4-1.png"
]
} |
1910.04269 | Spoken Language Identification using ConvNets | Language Identification (LI) is an important first step in several speech processing systems. With a growing number of voice-based assistants, speech LI has emerged as a widely researched field. To approach the problem of identifying languages, we can either adopt an implicit approach where only the speech for a language is present or an explicit one where text is available with its corresponding transcript. This paper focuses on an implicit approach due to the absence of transcriptive data. This paper benchmarks existing models and proposes a new attention based model for language identification which uses log-Mel spectrogram images as input. We also present the effectiveness of raw waveforms as features to neural network models for LI tasks. For training and evaluation of models, we classified six languages (English, French, German, Spanish, Russian and Italian) with an accuracy of 95.4% and four languages (English, French, German, Spanish) with an accuracy of 96.3% obtained from the VoxForge dataset. This approach can further be scaled to incorporate more languages. | {
"section_name": [
"Introduction",
"Related Work",
"Proposed Method ::: Motivations",
"Proposed Method ::: Description of Features",
"Proposed Method ::: Model Description",
"Proposed Method ::: Model Details: 1D ConvNet",
"Proposed Method ::: Model Details: 1D ConvNet ::: Hyperparameter Optimization:",
"Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU",
"Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: ",
"Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Hyperparameter Optimization:",
"Proposed Method ::: Model details: 2D-ConvNet",
"Proposed Method ::: Dataset",
"Results and Discussion",
"Results and Discussion ::: Misclassification",
"Results and Discussion ::: Future Scope",
"Conclusion"
],
"paragraphs": [
[
"Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa.",
"Over the years, studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2. Every language is composed of phonemes, which are distinct unit of sounds in that language, such as b of black and g of green. Several prosodic and acoustic features are based on phonemes, which become the underlying features on whom the performance of the statistical model depends BIBREF3, BIBREF4. If two languages have many overlapping phonemes, then identifying them becomes a challenging task for a classifier. For example, the word cat in English, kat in Dutch, katze in German have different consonants but when used in a speech they all would sound quite similar.",
"Due to such drawbacks several studies have switched over to using Deep Neural Networks (DNNs) to harness their novel auto-extraction techniques BIBREF1, BIBREF5. This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 95.4% overall accuracy.",
"In previous studies BIBREF1, BIBREF7, BIBREF5, authors use log-Mel spectrum of a raw audio as inputs to their models. One of our contributions is to enhance the performance of this approach by utilising recent techniques like Mixup augmentation of inputs and exploring the effectiveness of Attention mechanism in enhancing performance of neural network. As log-Mel spectrum needs to be computed for each raw audio input and processing time for generating log-Mel spectrum increases linearly with length of audio, this acts as a bottleneck for these models. Hence, we propose the use of raw audio waveforms as inputs to deep neural network which boosts performance by avoiding additional overhead of computing log-Mel spectrum for each audio. Our 1D-ConvNet architecture auto-extracts and classifies features from this raw audio input.",
"The structure of the work is as follows. In Section 2 we discuss about the previous related studies in this field. The model architecture for both the raw waveforms and log-Mel spectrogram images is discussed in Section 3 along with the a discussion on hyperparameter space exploration. In Section 4 we present the experimental results. Finally, in Section 5 we discuss the conclusions drawn from the experiment and future work."
],
[
"Extraction of language dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8, BIBREF9, BIBREF10. Following their success in speaker verification systems, i-vectors have also been used as features in various classification networks. These approaches required significant domain knowledge BIBREF11, BIBREF9. Nowadays most of the attempts on spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12, BIBREF13.",
"Revay et al. BIBREF5 used the ResNet50 BIBREF14 architecture for classifying languages by generating the log-Mel spectra of each raw audio. The model uses a cyclic learning rate where learning rate increases and then decreases linearly. Maximum learning rate for a cycle is set by finding the optimal learning rate using fastai BIBREF15 library. The model classified six languages – English, French, Spanish, Russian, Italian and German – and achieving an accuracy of 89.0%.",
"Gazeau et al. BIBREF16 in his research showed how Neural Networks, Support Vector Machine and Hidden Markov Model (HMM) can be used to identify French, English, Spanish and German. Dataset was prepared using voice samples from Youtube News BIBREF17and VoxForge BIBREF6 datasets. Hidden Markov models convert speech into a sequence of vectors, was used to capture temporal features in speech. HMMs trained on VoxForge BIBREF6 dataset performed best in comparison to other models proposed by him on same VoxForge dataset. They reported an accuracy of 70.0%.",
"Bartz et al. BIBREF1 proposed two different hybrid Convolutional Recurrent Neural Networks for language identification. They proposed a new architecture for extracting spatial features from log-Mel spectra of raw audio using CNNs and then using RNNs for capturing temporal features to identify the language. This model achieved an accuracy of 91.0% on Youtube News Dataset BIBREF17. In their second architecture they used the Inception-v3 BIBREF18 architecture to extract spatial features which were then used as input for bi-directional LSTMs to predict the language accurately. This model achieved an accuracy of 96.0% on four languages which were English, German, French and Spanish. They also trained their CNN model (obtained after removing RNN from CRNN model) and the Inception-v3 on their dataset. However they were not able to achieve better results achieving and reported 90% and 95% accuracies, respectively.",
"Kumar et al. BIBREF0 used Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction coefficients (PLP), Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) as features for language identification. BFCC and RPLP are hybrid features derived using MFCC and PLP. They used two different models based on Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM) for classification. These classification models were trained with different features. The authors were able to show that these models worked better with hybrid features (BFCC and RPLP) as compared to conventional features (MFCC and PLP). GMM combined with RPLP features gave the most promising results and achieved an accuracy of 88.8% on ten languages. They designed their own dataset comprising of ten languages being Dutch, English, French, German, Italian, Russian, Spanish, Hindi, Telegu, and Bengali.",
"Montavon BIBREF7 generated Mel spectrogram as features for a time-delay neural network (TDNN). This network had two-dimensional convolutional layers for feature extraction. An elaborate analysis of how deep architectures outperform their shallow counterparts is presented in this reseacrch. The difficulties in classifying perceptually similar languages like German and English were also put forward in this work. It is mentioned that the proposed approach is less robust to new speakers present in the test dataset. This method was able to achieve an accuracy of 91.2% on dataset comprising of 3 languages – English, French and German.",
"In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)."
],
[
"Several state-of-the-art results on various audio classification tasks have been obtained by using log-Mel spectrograms of raw audio, as features BIBREF19. Convolutional Neural Networks have demonstrated an excellent performance gain in classification of these features BIBREF20, BIBREF21 against other machine learning techniques. It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22. This motivated us to develop a CNN-based architecture with attention since this approach hasn’t been applied to the task of language identification before.",
"Recently, using raw audio waveform as features to neural networks has become a popular approach in audio classification BIBREF23, BIBREF22. Raw waveforms have several artifacts which are not effectively captured by various conventional feature extraction techniques like Mel Frequency Cepstral Coefficients (MFCC), Constant Q Transform (CQT), Fast Fourier Transform (FFT), etc.",
"Audio files are a sequence of spoken words, hence they have temporal features too.A CNN is better at capturing spatial features only and RNNs are better at capturing temporal features as demonstrated by Bartz et al. BIBREF1 using audio files. Therefore, we combined both of these to make a CRNN model.",
"We propose three types of models to tackle the problem with different approaches, discussed as follows."
],
[
"As an average human's voice is around 300 Hz and according to Nyquist-Shannon sampling theorem all the useful frequencies (0-300 Hz) are preserved with sampling at 8 kHz, therefore, we sampled raw audio files from all six languages at 8 kHz",
"The average length of audio files in this dataset was about 10.4 seconds and standard deviation was 2.3 seconds. For our experiments, the audio length was set to 10 seconds. If the audio files were shorter than 10 second, then the data was repeated and concatenated. If audio files were longer, then the data was truncated."
],
[
"We applied the following design principles to all our models:",
"Every convolutional layer is always followed by an appropriate max pooling layer. This helps in containing the explosion of parameters and keeps the model small and nimble.",
"Convolutional blocks are defined as an individual block with multiple pairs of one convolutional layer and one max pooling layer. Each convolutional block is preceded or succeded by a convolutional layer.",
"Batch Normalization and Rectified linear unit activations were applied after each convolutional layer. Batch Normalization helps speed up convergence during training of a neural network.",
"Model ends with a dense layer which acts the final output layer."
],
[
"As the sampling rate is 8 kHz and audio length is 10 s, hence the input is raw audio to the models with input size of (batch size, 1, 80000). In Table TABREF10, we present a detailed layer-by-layer illustration of the model along with its hyperparameter.",
"-10pt"
],
[
"Tuning hyperparameters is a cumbersome process as the hyperparamter space expands exponentially with the number of parameters, therefore efficient exploration is needed for any feasible study. We used the random search algorithm supported by Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF12, various hyperparameters we considered are plotted against the validation accuracy as violin plots. Our observations for each hyperparameter are summarized below:",
"Number of filters in first layer: We observe that having 128 filters gives better results as compared to other filter values of 32 and 64 in the first layer. A higher number of filters in the first layer of network is able to preserve most of the characteristics of input.",
"Kernel Size: We varied the receptive fields of convolutional layers by choosing the kernel size from among the set of {3, 5, 7, 9}. We observe that a kernel size of 9 gives better accuracy at the cost of increased computation time and larger number of parameters. A large kernel size is able to capture longer patterns in its input due to bigger receptive power which results in an improved accuracy.",
"Dropout: Dropout randomly turns-off (sets to 0) various individual nodes during training of the network. In a deep CNN it is important that nodes do not develop a co-dependency amongst each other during training in order to prevent overfitting on training data BIBREF25. Dropout rate of $0.1$ works well for our model. When using a higher dropout rate the network is not able to capture the patterns in training dataset.",
"Batch Size: We chose batch sizes from amongst the set {32, 64, 128}. There is more noise while calculating error in a smaller batch size as compared to a larger one. This tends to have a regularizing effect during training of the network and hence gives better results. Thus, batch size of 32 works best for the model.",
"Layers in Convolutional block 1 and 2: We varied the number of layers in both the convolutional blocks. If the number of layers is low, then the network does not have enough depth to capture patterns in the data whereas having large number of layers leads to overfitting on the data. In our network, two layers in the first block and one layer in the second block give optimal results."
],
[
"Log-Mel spectrogram is the most commonly used method for converting audio into the image domain. The audio data was again sampled at 8 kHz. The input to this model was the log-Mel spectra. We generated log-Mel spectrogram using the LibROSA BIBREF26 library. In Table TABREF16, we present a detailed layer-by-layer illustration of the model along with its hyperparameter."
],
[
"We took some specific design choices for this model, which are as follows:",
"We added residual connections with each convolutional layer. Residual connections in a way makes the model selective of the contributing layers, determines the optimal number of layers required for training and solves the problem of vanishing gradients. Residual connections or skip connections skip training of those layers that do not contribute much in the overall outcome of model.",
"We added spatial attention BIBREF27 networks to help the model in focusing on specific regions or areas in an image. Spatial attention aids learning irrespective of transformations, scaling and rotation done on the input images making the model more robust and helping it to achieve better results.",
"We added Channel Attention networks so as to help the model to find interdependencies among color channels of log-Mel spectra. It adaptively assigns importance to each color channel in a deep convolutional multi-channel network. In our model we apply channel and spatial attention just before feeding the input into bi-directional GRU. This helps the model to focus on selected regions and at the same time find patterns among channels to better determine the language."
],
[
"We used the random search algorithm supported by Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF19 ,various hyperparameters we tuned are plotted against the validation accuracy. Our observations for each hyperparameter are summarized below:",
"Filter Size: 64 filters in the first layer of network can preserve most of the characteristics of input, but increasing it to 128 is inefficient as overfitting occurs.",
"Kernel Size: There is a trade-off between kernel size and capturing complex non-linear features. Using a small kernel size will require more layers to capture features whereas using a large kernel size will require less layers. Large kernels capture simple non-linear features whereas using a smaller kernel will help us capture more complex non-linear features. However, with more layers, backpropagation necessitates the need for a large memory. We experimented with large kernel size and gradually increased the layers in order to capture more complex features. The results are not conclusive and thus we chose kernel size of 7 against 3.",
"Dropout: Dropout rate of 0.1 works well for our data. When using a higher dropout rate the network is not able to capture the patterns in training dataset.",
"Batch Size: There is always a trade-off between batch size and getting accurate gradients. Using a large batch size helps the model to get more accurate gradients since the model tries to optimize gradients over a large set of images. We found that using a batch size of 128 helped the model to train faster and get better results than using a batch size less than 128.",
"Number of hidden units in bi-directional GRU: Varying the number of hidden units and layers in GRU helps the model to capture temporal features which can play a significant role in identifying the language correctly. The optimal number of hidden units and layers depends on the complexity of the dataset. Using less number of hidden units may capture less features whereas using large number of hidden units may be computationally expensive. In our case we found that using 1536 hidden units in a single bi-directional GRU layer leads to the best result.",
"Image Size: We experimented with log-Mel spectra images of sizes $64 \\times 64$ and $128 \\times 128$ pixels and found that our model worked best with images of size of $128 \\times 128$ pixels.",
"We also evaluated our model on data with mixup augmentation BIBREF28. It is a data augmentation technique that also acts as a regularization technique and prevents overfitting. Instead of directly taking images from the training dataset as input, mixup takes a linear combination of any two random images and feeds it as input. The following equations were used to prepared a mixed-up dataset:",
"and",
"where $\\alpha \\in [0, 1]$ is a random variable from a $\\beta $-distribution, $I_1$."
],
[
"This model is a similar model to 2D-ConvNet with Attention and bi-directional GRU described in section SECREF13 except that it lacks skip connections, attention layers, bi-directional GRU and the embedding layer incorporated in the previous model."
],
[
"We classified six languages (English, French, German, Spanish, Russian and Italian) from the VoxForge BIBREF6 dataset. VoxForge is an open-source speech corpus which primarily consists of samples recorded and submitted by users using their own microphone. This results in significant variation of speech quality between samples making it more representative of real world scenarios.",
"Our dataset consists of 1,500 samples for each of six languages. Out of 1,500 samples for each language, 1,200 were randomly selected as training dataset for that language and rest 300 as validation dataset using k-fold cross-validation. To sum up, we trained our model on 7,200 samples and validated it on 1800 samples comprising six languages. The results are discussed in next section."
],
[
"This paper discusses two end-to-end approaches which achieve state-of-the-art results in both the image as well as audio domain on the VoxForge dataset BIBREF6. In Table TABREF25, we present all the classification accuracies of the two models of the cases with and without mixup for six and four languages.",
"In the audio domain (using raw audio waveform as input), 1D-ConvNet achieved a mean accuracy of 93.7% with a standard deviation of 0.3% on running k-fold cross validation. In Fig FIGREF27 (a) we present the confusion matrix for the 1D-ConvNet model.",
"In the image domain (obtained by taking log-Mel spectra of raw audio), 2D-ConvNet with 2D attention (channel and spatial attention) and bi-directional GRU achieved a mean accuracy of 95.0% with a standard deviation of 1.2% for six languages. This model performed better when mixup regularization was applied. 2D-ConvNet achieved a mean accuracy of 95.4% with standard deviation of 0.6% on running k-fold cross validation for six languages when mixup was applied. In Fig FIGREF27 (b) we present the confusion matrix for the 2D-ConvNet model. 2D attention models focused on the important features extracted by convolutional layers and bi-directional GRU captured the temporal features."
],
[
"Several of the spoken languages in Europe belong to the Indo-European family. Within this family, the languages are divided into three phyla which are Romance, Germanic and Slavic. Of the 6 languages that we selected Spanish (Es), French (Fr) and Italian (It) belong to the Romance phyla, English and German belong to Germanic phyla and Russian in Slavic phyla. Our model also confuses between languages belonging to the similar phyla which acts as an insanity check since languages in same phyla have many similar pronounced words such as cat in English becomes Katze in German and Ciao in Italian becomes Chao in Spanish.",
"Our model confuses between French (Fr) and Russian (Ru) while these languages belong to different phyla, many words from French were adopted into Russian such as automate (oot-oo-mate) in French becomes ABTOMaT (aff-taa-maat) in Russian which have similar pronunciation.",
""
],
[
"The performance of raw audio waveforms as input features to ConvNet can be further improved by applying silence removal in the audio. Also, there is scope for improvement by augmenting available data through various conventional techniques like pitch shifting, adding random noise and changing speed of audio. These help in making neural networks more robust to variations which might be present in real world scenarios. There can be further exploration of various feature extraction techniques like Constant-Q transform and Fast Fourier Transform and assessment of their impact on Language Identification.",
"There can be further improvements in neural network architectures like concatenating the high level features obtained from 1D-ConvNet and 2D-ConvNet, before performing classification. There can be experiments using deeper networks with skip connections and Inception modules. These are known to have positively impacted the performance of Convolutional Neural Networks."
],
[
"There are two main contributions of this paper in the domain of spoken language identification. Firstly, we presented an extensive analysis of raw audio waveforms as input features to 1D-ConvNet. We experimented with various hyperparameters in our 1D-ConvNet and evaluated their effect on validation accuracy. This method is able to bypass the computational overhead of conventional approaches which depend on generation of spectrograms as a necessary pre-procesing step. We were able to achieve an accauracy of 93.7% using this technique.",
"Next, we discussed the enhancement in performance of 2D-ConvNet using mixup augmentation, which is a recently developed technique to prevent overfitting on test data.This approach achieved an accuracy of 95.4%. We also analysed how attention mechanism and recurrent layers impact the performance of networks. This approach achieved an accuracy of 95.0%."
]
]
} | {
"question": [
"Does the model use both spectrogram images and raw waveforms as features?",
"Is the performance compared against a baseline model?",
"What is the accuracy reported by state-of-the-art methods?"
],
"question_id": [
"dc1fe3359faa2d7daa891c1df33df85558bc461b",
"922f1b740f8b13fdc8371e2a275269a44c86195e",
"b39f2249a1489a2cef74155496511cc5d1b2a73d"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"language identification",
"language identification",
"language identification"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Table 4: Results of the two models and all its variations"
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Results of the two models and all its variations"
]
}
],
"annotation_id": [
"32dee5de8cb44c67deef309c16e14e0634a7a95e"
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)."
],
"highlighted_evidence": [
"In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"1a51115249ab15633d834cd3ea7d986f6cc8d7c1",
"55b711611cb5f52eab6c38051fb155c5c37234ff"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Answer with content missing: (Table 1)\nPrevious state-of-the art on same dataset: ResNet50 89% (6 languages), SVM-HMM 70% (4 languages)",
"evidence": [
"In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel)."
],
"highlighted_evidence": [
"In Table TABREF1, we summarize the quantitative results of the above previous studies.",
"In Table TABREF1, we summarize the quantitative results of the above previous studies."
]
}
],
"annotation_id": [
"2405966a3c4bcf65f3b59888f345e2b0cc5ef7b0"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 2: Architecture of the 1D-ConvNet model",
"Fig. 1: Effect of hyperparameter variation of the hyperparameter on the classification accuracy for the case of 1D-ConvNet. Orange colored violin plots show the most favored choice of the hyperparameter and blue shows otherwise. One dot represents one sample.",
"Table 3: Architecture of the 2D-ConvNet model",
"Fig. 2: Effect of hyperparameter variation of the six selected hyperparameter on the classification accuracy for the case of 2D-ConvNet. Orange colored violin plots show the most favored choice of the hyperparameter and blue shows otherwise. One dot represents one sample.",
"Table 4: Results of the two models and all its variations",
"Fig. 3: Confusion matrix for classification of six languages with our (a) 1DConvNet and (b) 2D-ConvNet model. Asterisk (*) marks a value less than 0.1%."
],
"file": [
"6-Table2-1.png",
"7-Figure1-1.png",
"8-Table3-1.png",
"9-Figure2-1.png",
"11-Table4-1.png",
"12-Figure3-1.png"
]
} |
1906.00378 | Unsupervised Bilingual Lexicon Induction from Mono-lingual Multimodal Data | Bilingual lexicon induction, translating words from the source language to the target language, is a long-standing natural language processing task. Recent endeavors prove that it is promising to employ images as a pivot to learn lexicon induction without reliance on parallel corpora. However, these vision-based approaches simply associate words with entire images, so they are constrained to translating concrete words and require object-centered images. We humans understand words better when they appear within a sentence with context. Therefore, in this paper, we propose to utilize images and their associated captions to address the limitations of previous approaches. We propose a multi-lingual caption model trained with different mono-lingual multimodal data to map words in different languages into joint spaces. Two types of word representation are induced from the multi-lingual caption model: linguistic features and localized visual features. The linguistic feature is learned from sentence contexts with visual semantic constraints, which is beneficial for learning translations of words that are less visually relevant. The localized visual feature attends to the region of the image that correlates with the word, which alleviates the restriction on images for obtaining salient visual representations. The two types of features are complementary for word translation. Experimental results on multiple language pairs demonstrate the effectiveness of our proposed method, which substantially outperforms previous vision-based approaches without using any parallel sentences or supervision of seed word pairs. | {
"section_name": [
"Introduction",
"Related Work",
"Unsupervised Bilingual Lexicon Induction",
"Multi-lingual Image Caption Model",
"Visual-guided Word Representation",
"Word Translation Prediction",
"Datasets",
"Experimental Setup",
"Evaluation of Multi-lingual Image Caption",
"Evaluation of Bilingual Lexicon Induction",
"Generalization to Diverse Language Pairs",
"Conclusion",
" Acknowledgments"
],
"paragraphs": [
[
"The bilingual lexicon induction task aims to automatically build word translation dictionaries across different languages, which is beneficial for various natural language processing tasks such as cross-lingual information retrieval BIBREF0 , multi-lingual sentiment analysis BIBREF1 , machine translation BIBREF2 and so on. Although building bilingual lexicon has achieved success with parallel sentences in resource-rich languages BIBREF2 , the parallel data is insufficient or even unavailable especially for resource-scarce languages and it is expensive to collect. On the contrary, there are abundant multimodal mono-lingual data on the Internet, such as images and their associated tags and descriptions, which motivates researchers to induce bilingual lexicon from these non-parallel data without supervision.",
"There are mainly two types of mono-lingual approaches to build bilingual dictionaries in recent works. The first is purely text-based, which explores the structure similarity between different linguistic space. The most popular approach among them is to linearly map source word embedding into the target word embedding space BIBREF3 , BIBREF4 . The second type utilizes vision as bridge to connect different languages BIBREF5 , BIBREF6 , BIBREF7 . It assumes that words correlating to similar images should share similar semantic meanings. So previous vision-based methods search images with multi-lingual words and translate words according to similarities of visual features extracted from the corresponding images. It has been proved that the visual-grounded word representation improves the semantic quality of the words BIBREF8 .",
"However, previous vision-based methods suffer from two limitations for bilingual lexicon induction. Firstly, the accurate translation performance is confined to concrete visual-relevant words such as nouns and adjectives as shown in Figure SECREF2 . For words without high-quality visual groundings, previous methods would generate poor translations BIBREF7 . Secondly, previous works extract visual features from the whole image to represent words and thus require object-centered images in order to obtain reliable visual groundings. However, common images usually contain multiple objects or scenes, and the word might only be grounded to part of the image, therefore the global visual features will be quite noisy to represent the word.",
"In this paper, we address the two limitations via learning from mono-lingual multimodal data with both sentence and visual context (e.g., image and caption data) to induce bilingual lexicon. Such multimodal data is also easily obtained for different languages on the Internet BIBREF9 . We propose a multi-lingual image caption model trained on multiple mono-lingual image caption data, which is able to induce two types of word representations for different languages in the joint space. The first is the linguistic feature learned from the sentence context with visual semantic constraints, so that it is able to generate more accurate translations for words that are less visual-relevant. The second is the localized visual feature which attends to the local region of the object or scene in the image for the corresponding word, so that the visual representation of words will be more salient than previous global visual features. The two representations are complementary and can be combined to induce better bilingual word translation.",
"We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:"
],
[
"The early works for bilingual lexicon induction require parallel data in different languages. BIBREF2 systematically investigates various word alignment methods with parallel texts to induce bilingual lexicon. However, the parallel data is scarce or even unavailable for low-resource languages. Therefore, methods with less dependency on the availability of parallel corpora are highly desired.",
"There are mainly two types of mono-lingual approaches for bilingual lexicon induction: text-based and vision-based methods. The text-based methods purely exploit the linguistic information to translate words. The initiative works BIBREF10 , BIBREF11 utilize word co-occurrences in different languages as clue for word alignment. With the improvement in word representation based on deep learning, BIBREF3 finds the structure similarity of the deep-learned word embeddings in different languages, and employs a parallel vocabulary to learn a linear mapping from the source to target word embeddings. BIBREF12 improves the translation performance via adding an orthogonality constraint to the mapping. BIBREF13 further introduces a matching mechanism to induce bilingual lexicon with fewer seeds. However, these models require seed lexicon as the start-point to train the bilingual mapping. Recently, BIBREF4 proposes an adversarial learning approach to learn the joint bilingual embedding space without any seed lexicon.",
"The vision-based methods exploit images to connect different languages, which assume that words corresponding to similar images are semantically alike. BIBREF5 collects images with labeled words in different languages to learn word translation with image as pivot. BIBREF6 improves the visual-based word translation performance via using more powerful visual representations: the CNN-based BIBREF14 features. The above works mainly focus on the translation of nouns and are limited in the number of collected languages. The recent work BIBREF7 constructs the current largest (with respect to the number of language pairs and types of part-of-speech) multimodal word translation dataset, MMID. They show that concrete words are easiest for vision-based translation methods while others are much less accurate. In our work, we alleviate the limitations of previous vision-based methods via exploring images and their captions rather than images with unstructured tags to connect different languages.",
"Image captioning has received more and more research attentions. Most image caption works focus on the English caption generation BIBREF15 , BIBREF16 , while there are limited works considering generating multi-lingual captions. The recent WMT workshop BIBREF17 has proposed a subtask of multi-lingual caption generation, where different strategies such as multi-task captioning and source-to-target translation followed by captioning have been proposed to generate captions in target languages. Our work proposes a multi-lingual image caption model that shares part of the parameters across different languages in order to benefit each other."
],
[
"Our goal is to induce bilingual lexicon without supervision of parallel sentences or seed word pairs, purely based on the mono-lingual image caption data. In the following, we introduce the multi-lingual image caption model whose objectives for bilingual lexicon induction are two folds: 1) explicitly build multi-lingual word embeddings in the joint linguistic space; 2) implicitly extract the localized visual features for each word in the shared visual space. The former encodes linguistic information of words while the latter encodes the visual-grounded information, which are complementary for bilingual lexicon induction."
],
[
"Suppose we have mono-lingual image caption datasets INLINEFORM0 in the source language and INLINEFORM1 in the target language. The images INLINEFORM2 in INLINEFORM3 and INLINEFORM4 do not necessarily overlap, but cover overlapped object or scene classes which is the basic assumption of vision-based methods. For notation simplicity, we omit the superscript INLINEFORM5 for the data sample. Each image caption INLINEFORM6 and INLINEFORM7 is composed of word sequences INLINEFORM8 and INLINEFORM9 respectively, where INLINEFORM10 is the sentence length.",
"The proposed multi-lingual image caption model aims to generate sentences in different languages to describe the image content, which connects the vision and multi-lingual sentences. Figure FIGREF15 illustrates the framework of the caption model, which consists of three parts: the image encoder, word embedding module and language decoder.",
"The image encoder encodes the image into the shared visual space. We apply the Resnet152 BIBREF18 as our encoder INLINEFORM0 , which produces INLINEFORM1 vectors corresponding to different spatial locations in the image: DISPLAYFORM0 ",
"where INLINEFORM0 . The parameter INLINEFORM1 of the encoder is shared for different languages in order to encode all the images in the same visual space.",
"The word embedding module maps the one-hot word representation in each language into low-dimensional distributional embeddings: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 is the word embedding matrix for the source and target languages respectively. INLINEFORM2 and INLINEFORM3 are the vocabulary size of the two languages.",
"The decoder then generates word step by step conditioning on the encoded image feature and previous generated words. The probability of generating INLINEFORM0 in the source language is as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the hidden state of the decoder at step INLINEFORM1 , which is functioned by LSTM BIBREF19 : DISPLAYFORM0 ",
"The INLINEFORM0 is the dynamically located contextual image feature to generate word INLINEFORM1 via attention mechanism, which is the weighted sum of INLINEFORM2 computed by DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 is a fully connected neural network. The parameter INLINEFORM1 in the decoder includes all the weights in the LSTM and the attention network INLINEFORM2 .",
"Similarly, INLINEFORM0 is the probability of generating INLINEFORM1 in the target language, which shares INLINEFORM2 with the source language. By sharing the same parameters across different languages in the encoder and decoder, both the visual features and the learned word embeddings for different languages are enforced to project in a joint semantic space. To be noted, the proposed multi-lingual parameter sharing strategy is not constrained to the presented image captioning model, but can be applied in various image captioning models such as show-tell model BIBREF15 and so on.",
"We use maximum likelihood as objective function to train the multi-lingual caption model, which maximizes the log-probability of the ground-truth captions: DISPLAYFORM0 "
],
[
"The proposed multi-lingual caption model can induce similarities of words in different languages from two aspects: the linguistic similarity and the visual similarity. In the following, we discuss the two types of similarity and then construct the source and target word representations.",
"The linguistic similarity is reflected from the learned word embeddings INLINEFORM0 and INLINEFORM1 in the multi-lingual caption model. As shown in previous works BIBREF20 , word embeddings learned from the language contexts can capture syntactic and semantic regularities in the language. However, if the word embeddings of different languages are trained independently, they are not in the same linguistic space and we cannot compute similarities directly. In our multi-lingual caption model, since images in INLINEFORM2 and INLINEFORM3 share the same visual space, the features of sentence INLINEFORM4 and INLINEFORM5 belonging to similar images are bound to be close in the same space with the visual constraints. Meanwhile, the language decoder is also shared, which enforces the word embeddings across languages into the same semantic space in order to generate similar sentence features. Therefore, INLINEFORM6 and INLINEFORM7 not only encode the linguistic information of different languages but also share the embedding space which enables direct cross-lingual similarity comparison. We refer the linguistic features of source and target words INLINEFORM8 and INLINEFORM9 as INLINEFORM10 and INLINEFORM11 respectively.",
"For the visual similarity, the multi-lingual caption model locates the image region to generate each word base on the spatial attention in Eq ( EQREF13 ), which can be used to calculate the localized visual representation of the word. However, since the attention is computed before word generation, the localization performance can be less accurate. It also cannot be generalized to image captioning models without spatial attention. Therefore, inspired by BIBREF21 , where they occlude over regions of the image to observe the change of classification probabilities, we feed different parts of the image to the caption model and investigate the probability changes for each word in the sentence. Algorithm SECREF16 presents the procedure of word localization and the grounded visual feature generation. Please note that such visual-grounding is learned unsupervisedly from the image caption data. Therefore, every word can be represented as a set of grounded visual features (the set size equals to the word occurrence number in the dataset). We refer the localized visual feature set for source word INLINEFORM0 as INLINEFORM1 , for target word INLINEFORM2 as INLINEFORM3 .",
"Generating localized visual features. Encoded image features INLINEFORM0 , sentence INLINEFORM1 . Localized visual features for each word INLINEFORM2 each INLINEFORM3 compute INLINEFORM4 according to Eq ( EQREF10 ) INLINEFORM5 INLINEFORM6 INLINEFORM7 "
],
[
"Since the word representations of the source and target language are in the same space, we could directly compute the similarities across languages. We apply l2-normalization on the word representations and measure with the cosine similarity. For linguistic features, the similarity is measured as: DISPLAYFORM0 ",
"However, there are a set of visual features associated with one word, so the visual similarity measurement between two words is required to take two sets of visual features as input. We aggregate the visual features in a single representation and then compute cosine similarity instead of point-wise similarities among two sets: DISPLAYFORM0 ",
"The reasons for performing aggregation are two folds. Firstly, the number of visual features is proportional to the word occurrence in our approach instead of fixed numbers as in BIBREF6 , BIBREF7 . So the computation cost for frequent words are much higher. Secondly, the aggregation helps to reduce noise, which is especially important for abstract words. The abstract words such as `event' are more visually diverse, but the overall styles of multiple images can reflect its visual semantics.",
"Due to the complementary characteristics of the two features, we combine them to predict the word translation. The translated word for INLINEFORM0 is DISPLAYFORM0 "
],
[
"For image captioning, we utilize the multi30k BIBREF22 , COCO BIBREF23 and STAIR BIBREF24 datasets. The multi30k dataset contains 30k images and annotations under two tasks. In task 1, each image is annotated with one English description which is then translated into German and French. In task 2, the image is independently annotated with 5 descriptions in English and German respectively. For German and English languages, we utilize annotations in task 2. For the French language, we can only employ French descriptions in task 1, so the training size for French is less than the other two languages. The COCO and STAIR datasets contain the same image set but are independently annotated in English and Japanese. Since the images in the wild for different languages might not overlap, we randomly split the image set into two disjoint parts of equal size. The images in each part only contain the mono-lingual captions. We use Moses SMT Toolkit to tokenize sentences and select words occurring more than five times in our vocabulary for each language. Table TABREF21 summarizes the statistics of caption datasets.",
"For bilingual lexicon induction, we use two visual datasets: BERGSMA and MMID. The BERGSMA dataset BIBREF5 consists of 500 German-English word translation pairs. Each word is associated with no more than 20 images. The words in BERGSMA dataset are all nouns. The MMID dataset BIBREF7 covers a larger variety of words and languages, including 9,808 German-English pairs and 9,887 French-English pairs. The source word can be mapped to multiple target words in their dictionary. Each word is associated with no more than 100 retrieved images. Since both these image datasets do not contain Japanese language, we download the Japanese-to-English dictionary online. We select words in each dataset that overlap with our caption vocabulary, which results in 230 German-English pairs in BERGSMA dataset, 1,311 German-English pairs and 1,217 French-English pairs in MMID dataset, and 2,408 Japanese-English pairs."
],
[
"For the multi-lingual caption model, we set the word embedding size and the hidden size of LSTM as 512. Adam algorithm is applied to optimize the model with learning rate of 0.0001 and batch size of 128. The caption model is trained up to 100 epochs and the best model is selected according to caption performance on the validation set.",
"We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:",
"CNN-mean: taking the similarity score of the averaged feature of the two image sets.",
"CNN-avgmax: taking the average of the maximum similarity scores of two image sets.",
"We evaluate the word translation performance using MRR (mean-reciprocal rank) as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the groundtruth translated words for source word INLINEFORM1 , and INLINEFORM2 denotes the rank of groundtruth word INLINEFORM3 in the rank list of translation candidates. We also measure the precision at K (P@K) score, which is the proportion of source words whose groundtruth translations rank in the top K words. We set K as 1, 5, 10 and 20."
],
[
"We first evaluate the captioning performance of the proposed multi-lingual caption model, which serves as the foundation stone for our bilingual lexicon induction method.",
"We compare the proposed multi-lingual caption model with the mono-lingual model, which consists of the same model structure, but is trained separately for each language. Table TABREF22 presents the captioning results on the multi30k dataset, where all the languages are from the Latin family. The multi-lingual caption model achieves comparable performance with mono-lingual model for data sufficient languages such as English and German, and significantly outperforms the mono-lingual model for the data-scarce language French with absolute 3.22 gains on the CIDEr metric. For languages with distinctive grammar structures such as English and Japanese, the multi-lingual model is also on par with the mono-lingual model as shown in Table TABREF29 . To be noted, the multi-lingual model contains about twice less of parameters than the independent mono-lingual models, which is more computation efficient.",
"We visualize the learned visual groundings from the multi-lingual caption model in Figure FIGREF32 . Though there is certain mistakes such as `musicians' in the bottom image, most of the words are grounded well with correct objects or scenes, and thus can obtain more salient visual features."
],
[
"We induce the linguistic features and localized visual features from the multi-lingual caption model for word translation from the source to target languages. Table TABREF30 presents the German-to-English word translation performance of the proposed features. In the BERGSMA dataset, the visual features achieve better translation results than the linguistic features while they are inferior to the linguistic features in the MMID dataset. This is because the vocabulary in BERGSMA dataset mainly consists of nouns, but the parts-of-speech is more diverse in the MMID dataset. The visual features contribute most to translate concrete noun words, while the linguistic features are beneficial to other abstract words. The fusion of the two features performs best for word translation, which demonstrates that the two features are complementary with each other.",
"We also compare our approach with previous state-of-the-art vision-based methods in Table TABREF30 . Since our visual feature is the averaged representation, it is fair to compare with the CNN-mean baseline method where the only difference lies in the feature rather than similarity measurement. The localized features perform substantially better than the global image features which demonstrate the effectiveness of the attention learned from the caption model. The combination of visual and linguistic features also significantly improves the state-of-the-art visual-based CNN-avgmax method with 11.6% and 6.7% absolute gains on P@1 on the BERGSMA and MMID dataset respectively.",
"In Figure FIGREF36 , we present the word translation performance for different POS (part-of-speech) labels. We assign the POS label for words in different languages according to their translations in English. We can see that the previous state-of-the-art vision-based approach contributes mostly to noun words which are most visual-relevant, while generates poor translations for other part-of-speech words. Our approach, however, substantially improves the translation performance for all part-of-speech classes. For concrete words such as nouns and adjectives, the localized visual features produce better representation than previous global visual features; and for other part-of-speech words, the linguistic features, which are learned with sentence context, are effective to complement the visual features. The fusion of the linguistic and localized visual features in our approach leads to significant performance improvement over the state-of-the-art baseline method for all types of POS classes.",
"Some correct and incorrect translation examples for different POS classes are shown in Table TABREF34 . The visual-relevant concrete words are easier to translate such as `phone' and `red'. But our approach still generates reasonable results for abstract words such as `area' and functional words such as `for' due to the fusion of visual and sentence contexts.",
"We also evaluate the influence of different image captioning structures on the bilingual lexicon induction. We compare our attention model (`attn') with the vanilla show-tell model (`mp') BIBREF15 , which applies mean pooling over spatial image features to generate captions and achieves inferior caption performance to the attention model. Table TABREF35 shows the word translation performance of the two caption models. The attention model with better caption performance also induces better linguistic and localized visual features for bilingual lexicon induction. Nevertheless, the show-tell model still outperforms the previous vision-based methods in Table TABREF30 ."
],
[
"Beside German-to-English word translation, we expand our approach to other languages including French and Japanese which is more distant from English.",
"The French-to-English word translation performance is presented in Table TABREF39 . To be noted, the training data of the French captions is five times less than German captions, which makes French-to-English word translation performance less competitive with German-to-English. But similarly, the fusion of linguistic and visual features achieves the best performance, which has boosted the baseline methods with 4.2% relative gains on the MRR metric and 17.4% relative improvements on the P@20 metric.",
"Table TABREF40 shows the Japanese-to-English word translation performance. Since the language structures of Japanese and English are quite different, the linguistic features learned from the multi-lingual caption model are less effective but still can benefit the visual features to improve the translation quality. The results on multiple diverse language pairs further demonstrate the generalization of our approach for different languages."
],
[
"In this paper, we address the problem of bilingual lexicon induction without reliance on parallel corpora. Based on the experience that we humans can understand words better when they are within the context and can learn word translations with external world (e.g. images) as pivot, we propose a new vision-based approach to induce bilingual lexicon with images and their associated sentences. We build a multi-lingual caption model from multiple mono-lingual multimodal data to map words in different languages into joint spaces. Two types of word representation, linguistic features and localized visual features, are induced from the caption model. The two types of features are complementary for word translation. Experimental results on multiple language pairs demonstrate the effectiveness of our proposed method, which leads to significant performance improvement over the state-of-the-art vision-based approaches for all types of part-of-speech. In the future, we will further expand the vision-pivot approaches for zero-resource machine translation without parallel sentences."
],
[
"This work was supported by National Natural Science Foundation of China under Grant No. 61772535, National Key Research and Development Plan under Grant No. 2016YFB1001202 and Research Foundation of Beijing Municipal Science & Technology Commission under Grant No. Z181100008918002."
]
]
} | {
"question": [
"Which vision-based approaches does this approach outperform?",
"What baseline is used for the experimental setup?",
"Which languages are used in the multi-lingual caption model?"
],
"question_id": [
"591231d75ff492160958f8aa1e6bfcbbcd85a776",
"9e805020132d950b54531b1a2620f61552f06114",
"95abda842c4df95b4c5e84ac7d04942f1250b571"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"irony",
"irony",
"irony"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"CNN-mean",
"CNN-avgmax"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:",
"CNN-mean: taking the similarity score of the averaged feature of the two image sets.",
"CNN-avgmax: taking the average of the maximum similarity scores of two image sets."
],
"highlighted_evidence": [
"We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:\n\nCNN-mean: taking the similarity score of the averaged feature of the two image sets.\n\nCNN-avgmax: taking the average of the maximum similarity scores of two image sets."
]
}
],
"annotation_id": [
"bdc283d7bd798de2ad5934ef59f1ff34e3db6d9a"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"CNN-mean",
"CNN-avgmax"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:",
"CNN-mean: taking the similarity score of the averaged feature of the two image sets.",
"CNN-avgmax: taking the average of the maximum similarity scores of two image sets."
],
"highlighted_evidence": [
"We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:\n\nCNN-mean: taking the similarity score of the averaged feature of the two image sets.\n\nCNN-avgmax: taking the average of the maximum similarity scores of two image sets."
]
}
],
"annotation_id": [
"1c1034bb22669723f38cbe0bf4e9ddd20d3a62f3"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"German-English, French-English, and Japanese-English"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:"
],
"highlighted_evidence": [
"We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. "
]
},
{
"unanswerable": false,
"extractive_spans": [
"multiple language pairs including German-English, French-English, and Japanese-English."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:"
],
"highlighted_evidence": [
"We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces."
]
}
],
"annotation_id": [
"5d299b615630a19692ca84d5cb48298cb9d97168",
"62775a4f4030a57e74e1711c20a2c7c9869075df"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Comparison of previous vision-based approaches and our proposed approach for bilingual lexicon induction. Best viewed in color.",
"Figure 2: Multi-lingual image caption model. The source and target language caption models share the same image encoder and language decoder, which enforce the word embeddings of different languages to project in the same space.",
"Table 1: Statistics of image caption datasets.",
"Table 2: Image captioning performance of different languages on the Multi30k dataset.",
"Figure 3: Visual groundings learned from the caption model.",
"Table 4: Performance of German to English word translation.",
"Table 6: Comparison of the image captioning models’ impact on the bilingual lexicon induction. The acronym L is for linguistic and V is for the visual feature.",
"Table 5: German-to-English word translation examples. ‘de’ is the source German word and ‘en’ is the groundtruth target English word. The ‘rank’ denotes the position of the groundtruth target word in the candidate ranking list. The ‘top3 translation’ presents the top 3 translated words of the source word by our system.",
"Figure 4: Performance comparison of German-to-English word translation on the MMID dataset. The word translation performance is broken down by part-of-speech labels.",
"Table 7: Performance of French to English word translation on the MMID dataset.",
"Table 8: Performance of Japanese to English word translation."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Figure3-1.png",
"6-Table4-1.png",
"6-Table6-1.png",
"6-Table5-1.png",
"7-Figure4-1.png",
"7-Table7-1.png",
"7-Table8-1.png"
]
} |
1912.13072 | AraNet: A Deep Learning Toolkit for Arabic Social Media | We describe AraNet, a collection of deep learning Arabic social media processing tools. Namely, we exploit an extensive host of publicly available and novel social media datasets to train bidirectional encoders from transformer models (BERT) to predict age, dialect, gender, emotion, irony, and sentiment. AraNet delivers state-of-the-art performance on a number of the cited tasks and competitive performance on others. In addition, AraNet has the advantage of being exclusively based on a deep learning framework and hence feature engineering free. To the best of our knowledge, AraNet is the first to perform predictions across such a wide range of tasks for Arabic NLP and thus meets a critical need. We publicly release AraNet to accelerate research and facilitate comparisons across the different tasks. | {
"section_name": [
"Introduction",
"Introduction ::: ",
"Methods",
"Data and Models ::: Age and Gender",
"Data and Models ::: Age and Gender ::: ",
"Data and Models ::: Dialect",
"Data and Models ::: Emotion",
"Data and Models ::: Irony",
"Data and Models ::: Sentiment",
"AraNet Design and Use",
"Related Works",
"Conclusion"
],
"paragraphs": [
[
"The proliferation of social media has made it possible to study large online communities at scale, thus making important discoveries that can facilitate decision making, guide policies, improve health and well-being, aid disaster response, etc. The wide host of languages, languages varieties, and dialects used on social media and the nuanced differences between users of various backgrounds (e.g., different age groups, gender identities) make it especially difficult to derive sufficiently valuable insights based on single prediction tasks. For these reasons, it would be desirable to offer NLP tools that can help stitch together a complete picture of an event across different geographical regions as impacting, and being impacted by, individuals of different identities. We offer AraNet as one such tool for Arabic social media processing."
],
[
"For Arabic, a collection of languages and varieties spoken by a wide population of $\\sim 400$ million native speakers covering a vast geographical region (shown in Figure FIGREF2), no such suite of tools currently exists. Many works have focused on sentiment analysis, e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 and dialect identification BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. However, there is generally rarity of resources on other tasks such as gender and age detection. This motivates our toolkit, which we hope can meet the current critical need for studying Arabic communities online. This is especially valuable given the waves of protests, uprisings, and revolutions that have been sweeping the region during the last decade.",
"Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models) . For many of the tasks we model, there have not been standard benchmarks for comparisons across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets. As such, by publishing our toolkit models, we believe model-based comparisons will be one way to relieve this bottleneck. For these reasons, we also package models from our recent works on dialect BIBREF12 and irony BIBREF14 as part of AraNet .",
"The rest of the paper is organized as follows: In Section SECREF2 we describe our methods. In Section SECREF3, we describe or refer to published literature for the dataset we exploit for each task and provide results our corresponding model acquires. Section SECREF4 is about AraNet design and use, and we overview related works in Section SECREF5 We conclude in Section SECREF6"
],
[
"Supervised BERT. Across all our tasks, we use Bidirectional Encoder Representations from Transformers (BERT). BERT BIBREF15, dispenses with recurrence and convolution. It is based on a multi-layer bidirectional Transformer encoder BIBREF16, with multi-head attention. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task. The pre-trained BERT can be easily fine-tuned on a wide host of sentence-level and token-level tasks. All our models are trained in a fully supervised fashion, with dialect id being the only task where we leverage semi-supervised learning. We briefly outline our semi-supervised methods next.",
"Self-Training. Only for the dialect id task, we investigate augmenting our human-labeled training data with automatically-predicted data from self-training. Self-training is a wrapper method for semi-supervised learning BIBREF17, BIBREF18 where a classifier is initially trained on a (usually small) set of labeled samples $\\textbf {\\textit {D}}^{l}$, then is used to classify an unlabeled sample set $\\textbf {\\textit {D}}^{u}$. Most confident predictions acquired by the original supervised model are added to the labeled set, and the model is iteratively re-trained. We perform self-training using different confidence thresholds and choose different percentages from predicted data to add to our train. We only report best settings here and the reader is referred to our winning system on the MADAR shared task for more details on these different settings BIBREF12.",
"Implementation & Models Parameters. For all our tasks, we use the BERT-Base Multilingual Cased model released by the authors . The model is trained on 104 languages (including Arabic) with 12 layer, 768 hidden units each, 12 attention heads, and has 110M parameters in entire model. The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs and choose the best model based on performance on a development set. We use the same hyper-parameters in all of our BERT models. We fine-tune BERT on each respective labeled dataset for each task. For BERT input, we apply WordPiece tokenization, setting the maximal sequence length to 50 words/WordPieces. For all tasks, we use a TensorFlow implementation. An exception is the sentiment analysis task, where we used a PyTorch implementation with the same hyper-parameters but with a learning rate $2e-6$.",
"Pre-processing. Most of our training data in all tasks come from Twitter. Exceptions are in some of the datasets we use for sentiment analysis, which we point out in Section SECREF23. Our pre-processing thus incorporates methods to clean tweets, other datasets (e.g., from the news domain) being much less noisy. For pre-processing, we remove all usernames, URLs, and diacritics in the data."
],
[
"Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet. Arab-tweet is a tweet dataset of 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic. BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set male, female and age group labels from the set under-25, 25-to34, above-35 at the user-level, which in turn is assigned at tweet level. Tweets with less than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. We provide a description of the data in Table TABREF10. Table TABREF10 also provides class breakdown across our splits.We note that BIBREF19 do not report classification models exploiting the data."
],
[
"We shuffle the Arab-tweet dataset and split it into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is in Table TABREF10. For pre-processing, we reduce 2 or more consecutive repetitions of the same character into only 2 and remove diacritics. With this dataset, we train a small unidirectional GRU (small-GRU) with a single 500-units hidden layer and $dropout=0.5$ as a baseline. Small-GRU is trained with the TRAIN set, batch size = 8, and up to 30 words of each sequence. Each word in the input sequence is represented as a trainable 300-dimension vector. We use the top 100K words which are weighted by mutual information as our vocabulary in the embedding layer. We evaluate the model on TEST set. Table TABREF14 show small-GRU obtain36.29% XX acc on age classification, and 53.37% acc on gender detection. We also report the accuracy of fine-tuned BERT models on TEST set in Table TABREF14. We can find that BERT models significantly perform better than our baseline on the two tasks. It improve with 15.13% (for age) and 11.93% acc (for gender) over the small-GRU.",
"UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries. The data had 1,246 “male\", 528 “female\", and 215 unknown users. We remove the “unknown\" category and balance the dataset to have 528 from each of the two `male\" and “female\" categories. We ended with 69,509 tweets for `male\" and 67,511 tweets for “female\". We split the users into 80% TRAIN set (110,750 tweets for 845 users), 10% DEV set (14,158 tweets for 106 users), and 10% TEST set (12,112 tweets for 105 users). We, then, model this dataset with BERT-Base, Multilingual Cased model and evaluate on development and test sets. Table TABREF15 shows that fine-tuned model obtains 62.42% acc on DEV and 60.54% acc on TEST.",
"We also combine the Arab-tweet gender dataset with our UBC-Twitter dataset for gender on training, development, and test, respectively, to obtain new TRAIN, DEV, and TEST. We fine-tune the BERT-Base, Multilingual Cased model with the combined TRAIN and evaluate on combined DEV and TEST. As Table TABREF15 shows, the model obtains 65.32% acc on combined DEV set, and 65.32% acc on combined TEST set. This is the model we package in AraNet ."
],
[
"The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. The corpus is divided into train, dev and test, and the organizers masked test set labels. We lost some tweets from training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). We also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Again, note that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. We used tweets from 21 Arab countries as distributed by task organizers, except that we lost some tweets when we crawled using tweet ids. We had 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, as described above. TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. More information about the data is in BIBREF21. We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and F1 score at 35.44. This is the winning system model in the MADAR shared task and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform superbly, with 77.40% acc and 71.70% F1 score on unseen MADAR blind test data."
],
[
"We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. The tweets are labeled with the Plutchik 8 primary emotions from the set: {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on use of seed phrases with the Arabic first person pronoun انا> (Eng. “I\") + a seed word expressing an emotion, e.g., فرحان> (Eng. “happy\"). The manually labeled part of the data comprises tweets carrying the seed phrases verified by human annotators $9,064$ tweets for inclusion of the respective emotion. The rest of the dataset is only labeled using distant supervision (LAMA-DIST) ($182,605$ tweets) . For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20. We combine LAMA+DINA and LAMA-DIST training set and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Based, Multilingual Cased on the LAMA-D2 and evaluate the model with same DEV and TEST sets from LAMA+DINA. On DEV set, the fine-tuned BERT model obtains 61.43% on accuracy and 58.83 on $F_1$ score. On TEST set, we acquire 62.38% acc and 60.32% $F_1$ score."
],
[
"We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony\"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine.",
"IDAT@FIRE2019 BIBREF24 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. A total of 4,024 tweets were released by organizers as training data. In addition, 1,006 tweets were used by organizers as test data. Test labels were not release; and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training data into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We use the same small-GRU architecture of Section 3.1 as our baselines. We fine-tune BERT-Based, Multilingual Cased model on our TRAIN, and evaluate on DEV. The small-GRU obtain 73.70% accuracy and 73.47% $F_1$ score. BERT model significantly out-performance than small-GRU, which achieve 81.64% accuracy and 81.62% $F_1$ score."
],
[
"We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize different types of label to binary labels in the set $\\lbrace `positive^{\\prime }, `negative^{\\prime }\\rbrace $ by following rules:",
"{Positive, Pos, or High-Pos} to `positive';",
"{Negative, Neg, or High-Neg} to `negative';",
"Exclude samples which label is not `positive' or `negative' such as `obj', `mixed', `neut', or `neutral'.",
"After label normalization, we obtain 126,766 samples. We split this datase into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is presented in Table TABREF27. We fine-tune pre-trained BERT on the TRAIN set using PyTorch implementation with $2e-6$ learning rate and 15 epochs, as explained in Section SECREF2. Our best model on the DEV set obtains 80.24% acc and 80.24% $F_1$. We evaluate this best model on TEST set and obtain 77.31% acc and 76.67% $F_1$."
],
[
"AraNet consists of identifier tools including age, gender, dialect, emotion, irony and sentiment. Each tool comes with an embedded model. The tool comes with modules for performing normalization and tokenization. AraNet can be used as a Python library or a command-line tool:",
"Python Library: Importing AraNet module as a Python library provides identifiers’ functions. Prediction is based on a text or a path to a file and returns the identified class label. It also returns the probability distribution over all available class labels if needed. Figure FIGREF34 shows two examples of using the tool as Python library.",
"Command-line tool: AraNet provides scripts supporting both command-line and interactive mode. Command-line mode accepts a text or file path. Interaction mode is good for quick interactive line-by-line experiments and also pipeline redirections.",
"AraNet is available through pip or from source on GitHub with detailed documentation."
],
[
"As we pointed out earlier, there has been several works on some of the tasks but less on others. By far, Arabic sentiment analysis has been the most popular task. Several works have been performed for MSA BIBREF35, BIBREF0 and dialectal BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 sentiment analysis. A number of works have also been published for dialect detection, including BIBREF9, BIBREF10, BIBREF8, BIBREF11. Some works have been performed on the tasks of age detection BIBREF19, BIBREF36, gender detection BIBREF19, BIBREF36, irony identification BIBREF37, BIBREF24, and emotion analysis BIBREF38, BIBREF22.",
"A number of tools exist for Arabic natural language processing,including Penn Arabic treebank BIBREF39, POS tagger BIBREF40, BIBREF41, Buckwalter Morphological Analyzer BIBREF42 and Mazajak BIBREF7 for sentiment analysis ."
],
[
"We presented AraNet, a deep learning toolkit for a host of Arabic social media processing. AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently-developed BERT model. AraNet has the potential to alleviate issues related to comparing across different Arabic social media NLP tasks, by providing one way to test new models against AraNet predictions. Our toolkit can be used to make important discoveries on the wide region of the Arab world, and can enhance our understating of Arab online communication. AraNet will be publicly available upon acceptance."
]
]
} | {
"question": [
"Did they experiment on all the tasks?",
"What models did they compare to?",
"What datasets are used in training?"
],
"question_id": [
"2419b38624201d678c530eba877c0c016cccd49f",
"b99d100d17e2a121c3c8ff789971ce66d1d40a4d",
"578d0b23cb983b445b1a256a34f969b34d332075"
],
"nlp_background": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Implementation & Models Parameters. For all our tasks, we use the BERT-Base Multilingual Cased model released by the authors . The model is trained on 104 languages (including Arabic) with 12 layer, 768 hidden units each, 12 attention heads, and has 110M parameters in entire model. The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs and choose the best model based on performance on a development set. We use the same hyper-parameters in all of our BERT models. We fine-tune BERT on each respective labeled dataset for each task. For BERT input, we apply WordPiece tokenization, setting the maximal sequence length to 50 words/WordPieces. For all tasks, we use a TensorFlow implementation. An exception is the sentiment analysis task, where we used a PyTorch implementation with the same hyper-parameters but with a learning rate $2e-6$.",
"We presented AraNet, a deep learning toolkit for a host of Arabic social media processing. AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently-developed BERT model. AraNet has the potential to alleviate issues related to comparing across different Arabic social media NLP tasks, by providing one way to test new models against AraNet predictions. Our toolkit can be used to make important discoveries on the wide region of the Arab world, and can enhance our understating of Arab online communication. AraNet will be publicly available upon acceptance."
],
"highlighted_evidence": [
"For all tasks, we use a TensorFlow implementation.",
"AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently-developed BERT model. "
]
}
],
"annotation_id": [
"1e6f1f17079c5fd39dec880a5002fb9fc8d59412"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models) . For many of the tasks we model, there have not been standard benchmarks for comparisons across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets. As such, by publishing our toolkit models, we believe model-based comparisons will be one way to relieve this bottleneck. For these reasons, we also package models from our recent works on dialect BIBREF12 and irony BIBREF14 as part of AraNet ."
],
"highlighted_evidence": [
"Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models) . For many of the tasks we model, there have not been standard benchmarks for comparisons across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets."
]
}
],
"annotation_id": [
"9b44463b816b36f0753046936b967760414f856d"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Arap-Tweet BIBREF19 ",
"an in-house Twitter dataset for gender",
"the MADAR shared task 2 BIBREF20",
"the LAMA-DINA dataset from BIBREF22",
"LAMA-DIST",
"Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24",
"BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet. Arab-tweet is a tweet dataset of 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic. BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set male, female and age group labels from the set under-25, 25-to34, above-35 at the user-level, which in turn is assigned at tweet level. Tweets with less than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. We provide a description of the data in Table TABREF10. Table TABREF10 also provides class breakdown across our splits.We note that BIBREF19 do not report classification models exploiting the data.",
"UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries. The data had 1,246 “male\", 528 “female\", and 215 unknown users. We remove the “unknown\" category and balance the dataset to have 528 from each of the two `male\" and “female\" categories. We ended with 69,509 tweets for `male\" and 67,511 tweets for “female\". We split the users into 80% TRAIN set (110,750 tweets for 845 users), 10% DEV set (14,158 tweets for 106 users), and 10% TEST set (12,112 tweets for 105 users). We, then, model this dataset with BERT-Base, Multilingual Cased model and evaluate on development and test sets. Table TABREF15 shows that fine-tuned model obtains 62.42% acc on DEV and 60.54% acc on TEST.",
"The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. The corpus is divided into train, dev and test, and the organizers masked test set labels. We lost some tweets from training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). We also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Again, note that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. We used tweets from 21 Arab countries as distributed by task organizers, except that we lost some tweets when we crawled using tweet ids. We had 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, as described above. TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. More information about the data is in BIBREF21. We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and F1 score at 35.44. This is the winning system model in the MADAR shared task and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform superbly, with 77.40% acc and 71.70% F1 score on unseen MADAR blind test data.",
"We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. The tweets are labeled with the Plutchik 8 primary emotions from the set: {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on use of seed phrases with the Arabic first person pronoun انا> (Eng. “I\") + a seed word expressing an emotion, e.g., فرحان> (Eng. “happy\"). The manually labeled part of the data comprises tweets carrying the seed phrases verified by human annotators $9,064$ tweets for inclusion of the respective emotion. The rest of the dataset is only labeled using distant supervision (LAMA-DIST) ($182,605$ tweets) . For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20. We combine LAMA+DINA and LAMA-DIST training set and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Based, Multilingual Cased on the LAMA-D2 and evaluate the model with same DEV and TEST sets from LAMA+DINA. On DEV set, the fine-tuned BERT model obtains 61.43% on accuracy and 58.83 on $F_1$ score. On TEST set, we acquire 62.38% acc and 60.32% $F_1$ score.",
"We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony\"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine.",
"We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize different types of label to binary labels in the set $\\lbrace `positive^{\\prime }, `negative^{\\prime }\\rbrace $ by following rules:"
],
"highlighted_evidence": [
"Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet.",
"UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries.",
"The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. The corpus is divided into train, dev and test, and the organizers masked test set labels.",
"We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels.",
"The rest of the dataset is only labeled using distant supervision (LAMA-DIST) ($182,605$ tweets) . For more information about the dataset, readers are referred to BIBREF22. ",
"We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24.",
"We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. "
]
},
{
"unanswerable": false,
"extractive_spans": [
" Arap-Tweet ",
"UBC Twitter Gender Dataset",
"MADAR ",
"LAMA-DINA ",
"IDAT@FIRE2019",
"15 datasets related to sentiment analysis of Arabic, including MSA and dialects"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet. Arab-tweet is a tweet dataset of 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic. BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set male, female and age group labels from the set under-25, 25-to34, above-35 at the user-level, which in turn is assigned at tweet level. Tweets with less than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. We provide a description of the data in Table TABREF10. Table TABREF10 also provides class breakdown across our splits.We note that BIBREF19 do not report classification models exploiting the data.",
"UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries. The data had 1,246 “male\", 528 “female\", and 215 unknown users. We remove the “unknown\" category and balance the dataset to have 528 from each of the two `male\" and “female\" categories. We ended with 69,509 tweets for `male\" and 67,511 tweets for “female\". We split the users into 80% TRAIN set (110,750 tweets for 845 users), 10% DEV set (14,158 tweets for 106 users), and 10% TEST set (12,112 tweets for 105 users). We, then, model this dataset with BERT-Base, Multilingual Cased model and evaluate on development and test sets. Table TABREF15 shows that fine-tuned model obtains 62.42% acc on DEV and 60.54% acc on TEST.",
"The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. The corpus is divided into train, dev and test, and the organizers masked test set labels. We lost some tweets from training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). We also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Again, note that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. We used tweets from 21 Arab countries as distributed by task organizers, except that we lost some tweets when we crawled using tweet ids. We had 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST). For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2, to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, as described above. TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data. More information about the data is in BIBREF21. We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and F1 score at 35.44. This is the winning system model in the MADAR shared task and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform superbly, with 77.40% acc and 71.70% F1 score on unseen MADAR blind test data.",
"Data and Models ::: Emotion",
"We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. The tweets are labeled with the Plutchik 8 primary emotions from the set: {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on use of seed phrases with the Arabic first person pronoun انا> (Eng. “I\") + a seed word expressing an emotion, e.g., فرحان> (Eng. “happy\"). The manually labeled part of the data comprises tweets carrying the seed phrases verified by human annotators $9,064$ tweets for inclusion of the respective emotion. The rest of the dataset is only labeled using distant supervision (LAMA-DIST) ($182,605$ tweets) . For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20. We combine LAMA+DINA and LAMA-DIST training set and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Based, Multilingual Cased on the LAMA-D2 and evaluate the model with same DEV and TEST sets from LAMA+DINA. On DEV set, the fine-tuned BERT model obtains 61.43% on accuracy and 58.83 on $F_1$ score. On TEST set, we acquire 62.38% acc and 60.32% $F_1$ score.",
"We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony\"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine.",
"We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize different types of label to binary labels in the set $\\lbrace `positive^{\\prime }, `negative^{\\prime }\\rbrace $ by following rules:"
],
"highlighted_evidence": [
"For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet.",
"UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. ",
"The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12. ",
"Emotion\nWe make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. ",
"We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24.",
"We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34."
]
}
],
"annotation_id": [
"5d7ee4a4ac4dfc9729570afa0aa189959fe9de25",
"bd0a4cc46a4c8bef5c8b1c2f8d7e8b72d4f81308"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Figure 1: A map of Arab countries. Our different datasets cover varying regions of the Arab world as we describe in each section.",
"Table 1: Distribution of age and gender classes in our Arab-Tweet data splits",
"Table 2: Model performance in accuracy of Arab-Tweet age and gender classification tasks.",
"Table 4: Distribution of classes within the MADAR twitter corpus.",
"Table 6: Model performance of irony detection.",
"Table 5: Class distribution of LAMA+DINA and LAMADIST datasets.",
"Table 7: Distribution of sentiment analysis classes in our data splits.",
"Table 8: Sentiment Analysis Datasets. SA: Sentiment Analysis, SSA: Subjective Sentiment Analysis.",
"Figure 2: AraNet usage and output as Python library.",
"Figure 3: AraNet usage examples as command-line mode, pipeline and interactive mode."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"3-Table4-1.png",
"4-Table6-1.png",
"4-Table5-1.png",
"4-Table7-1.png",
"5-Table8-1.png",
"5-Figure2-1.png",
"5-Figure3-1.png"
]
} |
1712.09127 | Generative Adversarial Nets for Multiple Text Corpora | Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems. | {
"section_name": [
"Introduction",
"Literature Review",
"Models and Algorithms",
"weGAN: Training cross-corpus word embeddings",
"deGAN: Generating document embeddings for multi-corpus text data",
"Experiments",
"The CNN data set",
"The TIME data set",
"The 20 Newsgroups data set",
"The Reuters-21578 data set",
"Conclusion",
"Reference"
],
"paragraphs": [
[
"Generative adversarial nets (GAN) (Goodfellow et al., 2014) belong to a class of generative models which are trainable and can generate artificial data examples similar to the existing ones. In a GAN model, there are two sub-models simultaneously trained: a generative model INLINEFORM0 from which artificial data examples can be sampled, and a discriminative model INLINEFORM1 which classifies real data examples and artificial ones from INLINEFORM2 . By training INLINEFORM3 to maximize its generation power, and training INLINEFORM4 to minimize the generation power of INLINEFORM5 , so that ideally there will be no difference between the true and artificial examples, a minimax problem can be established. The GAN model has been shown to closely replicate a number of image data sets, such as MNIST, Toronto Face Database (TFD), CIFAR-10, SVHN, and ImageNet (Goodfellow et al., 2014; Salimans et al. 2016).",
"The GAN model has been extended to text data in a number of ways. For instance, Zhang et al. (2016) applied a long-short term memory (Hochreiter and Schmidhuber, 1997) generator and approximated discretization to generate text data. Moreover, Li et al. (2017) applied the GAN model to generate dialogues, i.e. pairs of questions and answers. Meanwhile, the GAN model can also be applied to generate bag-of-words embeddings of text data, which focus more on key terms in a text document rather than the original document itself. Glover (2016) provided such a model with the energy-based GAN (Zhao et al., 2017).",
"To the best of our knowledge, there has been no literature on applying the GAN model to multiple corpora of text data. Multi-class GANs (Liu and Tuzel, 2016; Mirza and Osindero, 2014) have been proposed, but a class in multi-class classification is not the same as multiple corpora. Because knowing the underlying corpus membership of each text document can provide better information on how the text documents are organized, and documents from the same corpus are expected to share similar topics or key words, considering the membership information can benefit the training of a text model from a supervised perspective. We consider two problems associated with training multi-corpus text data: (1) Given a separate set of word embeddings from each corpus, such as the word2vec embeddings (Mikolov et al., 2013), how to obtain a better set of cross-corpus word embeddings from them? (2) How to incorporate the generation of document embeddings from different corpora in a single GAN model?",
"For the first problem, we train a GAN model which discriminates documents represented by different word embeddings, and train the cross-corpus word embedding so that it is similar to each existing word embedding per corpus. For the second problem, we train a GAN model which considers both cross-corpus and per-corpus “topics” in the generator, and applies a discriminator which considers each original and artificial document corpus. We also show that with sufficient training, the distribution of the artificial document embeddings is equivalent to the original ones. Our work has the following contributions: (1) we extend GANs to multiple corpora of text data, (2) we provide applications of GANs to finetune word embeddings and to create robust document embeddings, and (3) we establish theoretical convergence results of the multi-class GAN model.",
"Section 2 reviews existing GAN models related to this paper. Section 3 describes the GAN models on training cross-corpus word embeddings and generating document embeddings for each corpora, and explains the associated algorithms. Section 4 presents the results of the two models on text data sets, and transfers them to supervised learning. Section 5 summarizes the results and concludes the paper."
],
[
"In a GAN model, we assume that the data examples INLINEFORM0 are drawn from a distribution INLINEFORM1 , and the artificial data examples INLINEFORM2 are transformed from the noise distribution INLINEFORM3 . The binary classifier INLINEFORM4 outputs the probability of a data example (or an artificial one) being an original one. We consider the following minimax problem DISPLAYFORM0 ",
"With sufficient training, it is shown in Goodfellow et al. (2014) that the distribution of artificial data examples INLINEFORM0 is eventually equivalent to the data distribution INLINEFORM1 , i.e. INLINEFORM2 .",
"Because the probabilistic structure of a GAN can be unstable to train, the Wasserstein GAN (Arjovsky et al., 2017) is proposed which applies a 1-Lipschitz function as a discriminator. In a Wasserstein GAN, we consider the following minimax problem DISPLAYFORM0 ",
"These GANs are for the general purpose of learning the data distribution in an unsupervised way and creating perturbed data examples resembling the original ones. We note that in many circumstances, data sets are obtained with supervised labels or categories, which can add explanatory power to unsupervised models such as the GAN. We summarize such GANs because a corpus can be potentially treated as a class. The main difference is that classes are purely for the task of classification while we are interested in embeddings that can be used for any supervised or unsupervised task.",
"For instance, the CoGAN (Liu and Tuzel, 2016) considers pairs of data examples from different categories as follows INLINEFORM0 ",
" where the weights of the first few layers of INLINEFORM0 and INLINEFORM1 (i.e. close to INLINEFORM2 ) are tied. Mirza and Osindero (2014) proposed the conditional GAN where the generator INLINEFORM3 and the discriminator INLINEFORM4 depend on the class label INLINEFORM5 . While these GANs generate samples resembling different classes, other variations of GANs apply the class labels for semi-supervised learning. For instance, Salimans et al. (2016) proposed the following objective DISPLAYFORM0 ",
"where INLINEFORM0 has INLINEFORM1 classes plus the INLINEFORM2 -th artificial class. Similar models can be found in Odena (2016), the CatGAN in Springenberg (2016), and the LSGAN in Mao et al. (2017). However, all these models consider only images and do not produce word or document embeddings, therefore being different from our models.",
"For generating real text, Zhang et al. (2016) proposed textGAN in which the generator has the following form, DISPLAYFORM0 ",
"where INLINEFORM0 is the noise vector, INLINEFORM1 is the generated sentence, INLINEFORM2 are the words, and INLINEFORM3 . A uni-dimensional convolutional neural network (Collobert et al, 2011; Kim, 2014) is applied as the discriminator. Also, a weighted softmax function is applied to make the argmax function differentiable. With textGAN, sentences such as “we show the efficacy of our new solvers, making it up to identify the optimal random vector...” can be generated. Similar models can also be found in Wang et al. (2016), Press et al. (2017), and Rajeswar et al. (2017). The focus of our work is to summarize information from longer documents, so we apply document embeddings such as the tf-idf to represent the documents rather than to generate real text.",
"For generating bag-of-words embeddings of text, Glover (2016) proposed the following model DISPLAYFORM0 ",
"and INLINEFORM0 is the mean squared error of a de-noising autoencoder, and INLINEFORM1 is the one-hot word embedding of a document. Our models are different from this model because we consider tf-idf document embeddings for multiple text corpora in the deGAN model (Section 3.2), and weGAN (Section 3.1) can be applied to produce word embeddings. Also, we focus on robustness based on several corpora, while Glover (2016) assumed a single corpus.",
"For extracting word embeddings given text data, Mikolov et al. (2013) proposed the word2vec model, for which there are two variations: the continuous bag-of-words (cBoW) model (Mikolov et al., 2013b), where the neighboring words are used to predict the appearance of each word; the skip-gram model, where each neighboring word is used individually for prediction. In GloVe (Pennington et al., 2013), a bilinear regression model is trained on the log of the word co-occurrence matrix. In these models, the weights associated with each word are used as the embedding. For obtaining document embeddings, the para2vec model (Le and Mikolov, 2014) adds per-paragraph vectors to train word2vec-type models, so that the vectors can be used as embeddings for each paragraph. A simpler approach by taking the average of the embeddings of each word in a document and output the document embedding is exhibited in Socher et al. (2013)."
],
[
"Suppose we have a number of different corpora INLINEFORM0 , which for example can be based on different categories or sentiments of text documents. We suppose that INLINEFORM1 , INLINEFORM2 , where each INLINEFORM3 represents a document. The words in all corpora are collected in a dictionary, and indexed from 1 to INLINEFORM4 . We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”"
],
[
"We assume that for each corpora INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . Note that INLINEFORM7 are given but INLINEFORM8 is trained. Here we consider INLINEFORM9 as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 and the same documents represented by the new embeddings INLINEFORM11 .",
"Next we describe how the documents are represented by a set of embeddings INLINEFORM0 and INLINEFORM1 . For each document INLINEFORM2 , we define its document embedding with INLINEFORM3 as follows, DISPLAYFORM0 ",
"where INLINEFORM0 can be any mapping. Similarly, we define the document embedding of INLINEFORM1 with INLINEFORM2 as follows, with INLINEFORM3 trainable DISPLAYFORM0 ",
"In a typical example, word embeddings would be based on word2vec or GLoVe. Function INLINEFORM0 can be based on tf-idf, i.e. INLINEFORM1 where INLINEFORM2 is the word embedding of the INLINEFORM3 -th word in the INLINEFORM4 -th corpus INLINEFORM5 and INLINEFORM6 is the tf-idf representation of the INLINEFORM7 -th document INLINEFORM8 in the INLINEFORM9 -th corpus INLINEFORM10 .",
"To train the GAN model, we consider the following minimax problem DISPLAYFORM0 ",
"where INLINEFORM0 is a discriminator of whether a document is original or artificial. Here INLINEFORM1 is the label of document INLINEFORM2 with respect to classifier INLINEFORM3 , and INLINEFORM4 is a unit vector with only the INLINEFORM5 -th component being one and all other components being zeros. Note that INLINEFORM6 is equivalent to INLINEFORM7 , but we use the former notation due to its brevity.",
"The intuition of problem (8) is explained as follows. First we consider a discriminator INLINEFORM0 which is a feedforward neural network (FFNN) with binary outcomes, and classifies the document embeddings INLINEFORM1 against the original document embeddings INLINEFORM2 . Discriminator INLINEFORM3 minimizes this classification error, i.e. it maximizes the log-likelihood of INLINEFORM4 having label 0 and INLINEFORM5 having label 1. This corresponds to DISPLAYFORM0 ",
"For the generator INLINEFORM0 , we wish to minimize (8) against INLINEFORM1 so that we can apply the minimax strategy, and the combined word embeddings INLINEFORM2 would resemble each set of word embeddings INLINEFORM3 . Meanwhile, we also consider classifier INLINEFORM4 with INLINEFORM5 outcomes, and associates INLINEFORM6 with label INLINEFORM7 , so that the generator INLINEFORM8 can learn from the document labeling in a semi-supervised way.",
"If the classifier INLINEFORM0 outputs a INLINEFORM1 -dimensional softmax probability vector, we minimize the following against INLINEFORM2 , which corresponds to (8) given INLINEFORM3 and INLINEFORM4 : DISPLAYFORM0 ",
"For the classifier INLINEFORM0 , we also minimize its negative log-likelihood DISPLAYFORM0 ",
"Assembling (9-11) together, we retrieve the original minimax problem (8).",
"We train the discriminator and the classifier, INLINEFORM0 , and the combined embeddings INLINEFORM1 according to (9-11) iteratively for a fixed number of epochs with the stochastic gradient descent algorithm, until the discrimination and classification errors become stable.",
"The algorithm for weGAN is summarized in Algorithm 1, and Figure 1 illustrates the weGAN model.",
"",
" Algorithm 1. Train INLINEFORM0 based on INLINEFORM1 from all corpora INLINEFORM2 . Randomly initialize the weights and biases of the classifier INLINEFORM3 and discriminator INLINEFORM4 . Until maximum number of iterations reached Update INLINEFORM5 and INLINEFORM6 according to (9) and (11) given a mini-batch INLINEFORM7 of training examples INLINEFORM8 . Update INLINEFORM9 according to (10) given a mini-batch INLINEFORM10 of training examples INLINEFORM11 . Output INLINEFORM12 as the cross-corpus word embeddings. ",
"",
"",
"",
"",
"",
"",
""
],
[
"In this section, our goal is to generate document embeddings which would resemble real document embeddings in each corpus INLINEFORM0 , INLINEFORM1 . We construct INLINEFORM2 generators, INLINEFORM3 so that INLINEFORM4 generate artificial examples in corpus INLINEFORM5 . As in Section 3.1, there is a certain document embedding such as tf-idf, bag-of-words, or para2vec. Let INLINEFORM6 . We initialize a noise vector INLINEFORM7 , where INLINEFORM8 , and INLINEFORM9 is any noise distribution.",
"For a generator INLINEFORM0 represented by its parameters, we first map the noise vector INLINEFORM1 to the hidden layer, which represents different topics. We consider two hidden vectors, INLINEFORM2 for general topics and INLINEFORM3 for specific topics per corpus, DISPLAYFORM0 ",
"Here INLINEFORM0 represents a nonlinear activation function. In this model, the bias term can be ignored in order to prevent the “mode collapse” problem of the generator. Having the hidden vectors, we then map them to the generated document embedding with another activation function INLINEFORM1 , DISPLAYFORM0 ",
"To summarize, we may represent the process from noise to the document embedding as follows, DISPLAYFORM0 ",
"Given the generated document embeddings INLINEFORM0 , we consider the following minimax problem to train the generator INLINEFORM1 and the discriminator INLINEFORM2 : INLINEFORM3 INLINEFORM4 ",
"Here we assume that any document embedding INLINEFORM0 in corpus INLINEFORM1 is a sample with respect to the probability density INLINEFORM2 . Note that when INLINEFORM3 , the discriminator part of our model is equivalent to the original GAN model.",
"To explain (15), first we consider the discriminator INLINEFORM0 . Because there are multiple corpora of text documents, here we consider INLINEFORM1 categories as output of INLINEFORM2 , from which categories INLINEFORM3 represent the original corpora INLINEFORM4 , and categories INLINEFORM5 represent the generated document embeddings (e.g. bag-of-words) from INLINEFORM6 . Assume the discriminator INLINEFORM7 , a feedforward neural network, outputs the distribution of a text document being in each category. We maximize the log-likelihood of each document being in the correct category against INLINEFORM8 DISPLAYFORM0 ",
"Such a classifier does not only classifies text documents into different categories, but also considers INLINEFORM0 “fake” categories from the generators. When training the generators INLINEFORM1 , we minimize the following which makes a comparison between the INLINEFORM2 -th and INLINEFORM3 -th categories DISPLAYFORM0 ",
"The intuition of (17) is that for each generated document embedding INLINEFORM0 , we need to decrease INLINEFORM1 , which is the probability of the generated embedding being correctly classified, and increase INLINEFORM2 , which is the probability of the generated embedding being classified into the target corpus INLINEFORM3 . The ratio in (17) reflects these two properties.",
"We iteratively train (16) and (17) until the classification error of INLINEFORM0 becomes stable. The algorithm for deGAN is summarized in Algorithm 2, and Figure 2 illustrates the deGAN model..",
"",
" Algorithm 2. Randomly initialize the weights of INLINEFORM0 . Initialize the discriminator INLINEFORM1 with the weights of the first layer (which takes document embeddings as the input) initialized by word embeddings, and other parameters randomly initialized. Until maximum number of iterations reached Update INLINEFORM2 according to (16) given a mini-batch of training examples INLINEFORM3 and samples from noise INLINEFORM4 . Update INLINEFORM5 according to (17) given a mini-batch of training examples INLINEFORM6 and samples form noise INLINEFORM7 . Output INLINEFORM8 as generators of document embeddings and INLINEFORM9 as a corpus classifier. ",
"",
"",
"",
"",
"",
"",
"",
"We next show that from (15), the distributions of the document embeddings from the optimal INLINEFORM0 are equal to the data distributions of INLINEFORM1 , which is a generalization of Goodfellow et al. (2014) to the multi-corpus scenario.",
"Proposition 1. Let us assume that the random variables INLINEFORM0 are continuous with probability density INLINEFORM1 which have bounded support INLINEFORM2 ; INLINEFORM3 is a continuous random variable with bounded support and activations INLINEFORM4 and INLINEFORM5 are continuous; and that INLINEFORM6 are solutions to (15). Then INLINEFORM7 , the probability density of the document embeddings from INLINEFORM8 , INLINEFORM9 , are equal to INLINEFORM10 .",
"Proof. Since INLINEFORM0 is bounded, all of the integrals exhibited next are well-defined and finite. Since INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are continuous, it follows that for any parameters, INLINEFORM4 is a continuous random variable with probability density INLINEFORM5 with finite support.",
"From the first line of (15), INLINEFORM0 ",
" This problem reduces to INLINEFORM0 subject to INLINEFORM1 , the solution of which is INLINEFORM2 , INLINEFORM3 . Therefore, the solution to (18) is DISPLAYFORM0 ",
"We then obtain from the second line of (15) that INLINEFORM0 ",
" From non-negativity of the Kullback-Leibler divergence, we conclude that INLINEFORM0 "
],
[
"In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpora to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set.",
"The document embedding in weGAN is the tf-idf weighted word embedding transformed by the INLINEFORM0 activation, i.e. DISPLAYFORM0 ",
"For deGAN, we use INLINEFORM0 -normalized tf-idf as the document embedding because it is easier to interpret than the transformed embedding in (20).",
"For weGAN, the cross-corpus word embeddings are initialized with the word2vec model trained from all documents. For training our models, we apply a learning rate which increases linearly from INLINEFORM0 to INLINEFORM1 and train the models for 100 epochs with a batch size of 50 per corpus. The classifier INLINEFORM2 has a single hidden layer with 50 hidden nodes, and the discriminator with a single hidden layer INLINEFORM3 has 10 hidden nodes. All these parameters have been optimized. For the labels INLINEFORM4 in (8), we apply corpus membership of each document.",
"For the noise distribution INLINEFORM0 for deGAN, we apply the uniform distribution INLINEFORM1 . In (14) for deGAN, INLINEFORM2 and INLINEFORM3 so that the model outputs document embedding vectors which are comparable to INLINEFORM4 -normalized tf-idf vectors for each document. For the discriminator INLINEFORM5 of deGAN, we apply the word2vec embeddings based on all corpora to initialize its first layer, followed by another hidden layer of 50 nodes. For the discriminator INLINEFORM6 , we apply a learning rate of INLINEFORM7 , and for the generator INLINEFORM8 , we apply a learning rate of INLINEFORM9 , because the initial training phase of deGAN can be unstable. We also apply a batch size of 50 per corpus. For the softmax layers of deGAN, we initialize them with the log of the topic-word matrix in latent Dirichlet allocation (LDA) (Blei et al., 2003) in order to provide intuitive estimates.",
"For weGAN, we consider two metrics for comparing the embeddings trained from weGAN and those trained from all documents: (1) applying the document embeddings to cluster the documents into INLINEFORM0 clusters with the K-means algorithm, and calculating the Rand index (RI) (Rand, 1971) against the original corpus membership; (2) finetuning the classifier INLINEFORM1 and comparing the classification error against an FFNN of the same structure initialized with word2vec (w2v). For deGAN, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the same FFNN. Each supervised model is trained for 500 epochs and the validation data set is used to choose the best epoch."
],
[
"In the CNN data set, we collected all news links on www.cnn.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the three largest categories: “politics,” “world,” and “US.” We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.",
"We hypothesize that because weGAN takes into account document labels in a semi-supervised way, the embeddings trained from weGAN can better incorporate the labeling information and therefore, produce document embeddings which are better separated. The results are shown in Table 1 and averaged over 5 randomized runs. Performing the Welch's t-test, both changes after weGAN training are statistically significant at a INLINEFORM0 significance level. Because the Rand index captures matching accuracy, we observe from the Table 1 that weGAN tends to improve both metrics.",
" Meanwhile, we also wish to observe the spatial structure of the trained embeddings, which can be explored by the synonyms of each word measured by the cosine similarity. On average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. Therefore, weGAN tends to provide small adjustments rather than structural changes. Table 2 lists the 10 most similar terms of three terms, “Obama,” “Trump,” and “U.S.,” before and after weGAN training, ordered by cosine similarity.",
"",
"We observe from Table 2 that for “Obama,” ”Trump” and “Tillerson” are more similar after weGAN training, which means that the structure of the weGAN embeddings can be more up-to-date. For “Trump,” we observe that “Clinton” is not among the synonyms before, but is after, which shows that the synonyms after are more relevant. For “U.S.,” we observe that after training, “American” replaces “British” in the list of synonyms, which is also more relevant.",
"We next discuss deGAN. In Table 3, we compare the performance of finetuning the discriminator of deGAN for document classification, and the performance of the FFNN initialized with word2vec. The change is also statistically significant at the INLINEFORM0 level. From Table 3, we observe that deGAN improves the accuracy of supervised learning.",
"",
"To compare the generated samples from deGAN with the original bag-of-words, we randomly select one record in each original and artificial corpus. The records are represented by the most frequent words sorted by frequency in descending order where the stop words are removed. The bag-of-words embeddings are shown in Table 4.",
"",
"From Table 4, we observe that the bag-of-words embeddings of the original documents tend to contain more name entities, while those of the artificial deGAN documents tend to be more general. There are many additional examples not shown here with observed artificial bag-of-words embeddings having many name entities such as “Turkey,” “ISIS,” etc. from generated documents, e.g. “Syria eventually ISIS U.S. details jet aircraft October video extremist...”",
"We also perform dimensional reduction using t-SNE (van der Maaten and Hinton, 2008), and plot 100 random samples from each original or artificial category. The original samples are shown in red and the generated ones are shown in blue in Figure 3. We do not further distinguish the categories because there is no clear distinction between the three original corpora, “politics,” “world,” and “US.” The results are shown in Figure 3.",
"We observe that the original and artificial examples are generally mixed together and not well separable, which means that the artificial examples are similar to the original ones. However, we also observe that the artificial samples tend to be more centered and have no outliers (represented by the outermost red oval)."
],
[
"In the TIME data set, we collected all news links on time.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the five largest categories: “Entertainment,” “Ideas,” “Politics,” “US,” and “World.” We divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents.",
"Table 5 compares the clustering results of word2vec and weGAN, and the classification accuracy of an FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. The results in Table 5 are the counterparts of Table 1 and Table 3 for the TIME data set. The differences are also significant at the INLINEFORM0 level.",
"",
"From Table 5, we observe that both GAN models yield improved performance of supervised learning. For weGAN, on an average, the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training, and INLINEFORM1 of all words have different top 10 synonyms after training. We also compare the synonyms of the same common words, “Obama,” “Trump,” and “U.S.,” which are listed in Table 6.",
"",
"In the TIME data set, for “Obama,” “Reagan” is ranked slightly higher as an American president. For “Trump,” “Bush” and “Sanders” are ranked higher as American presidents or candidates. For “U.S.,” we note that “Pentagon” is ranked higher after weGAN training, which we think is also reasonable because the term is closely related to the U.S. government.",
"For deGAN, we also compare the original and artificial samples in terms of the highest probability words. Table 7 shows one record for each category.",
"",
"From Table 7, we observe that the produced bag-of-words are generally alike, and the words in the same sample are related to each other to some extent.",
"We also perform dimensional reduction using t-SNE for 100 examples per corpus and plot them in Figure 4. We observe that the points are generated mixed but deGAN cannot reproduce the outliers."
],
[
"The 20 Newsgroups data set is a collection of news documents with 20 categories. To reduce the number of categories so that the GAN models are more compact and have more samples per corpus, we grouped the documents into 6 super-categories: “religion,” “computer,” “cars,” “sport,” “science,” and “politics” (“misc” is ignored because of its noisiness). We considered each super-category as a different corpora. We then divided these documents into INLINEFORM0 training documents, from which INLINEFORM1 validation documents are held out, and INLINEFORM2 testing documents. We train weGAN and deGAN in the the beginning of Section 4, except that we use a learning rate of INLINEFORM3 for the discriminator in deGAN to stabilize the cost function. Table 8 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM4 level. The other results are similar to the previous two data sets and are thereby omitted here.",
""
],
[
"The Reuters-21578 data set is a collection of newswire articles. Because the data set is highly skewed, we considered the eight categories with more than 100 training documents: “earn,” “acq,” “crude,” “trade,” “money-fx,” “interest,” “money-supply,” and “ship.” We then divided these documents into INLINEFORM0 training documents, from which 692 validation documents are held out, and INLINEFORM1 testing documents. We train weGAN and deGAN in the same way as in the 20 Newsgroups data set. Table 9 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the INLINEFORM2 level except the Rand index. The other results are similar to the CNN and TIME data sets and are thereby omitted here.",
""
],
[
"In this paper, we have demonstrated the application of the GAN model on text data with multiple corpora. We have shown that the GAN model is not only able to generate images, but also able to refine word embeddings and generate document embeddings. Such models can better learn the inner structure of multi-corpus text data, and also benefit supervised learning. The improvements in supervised learning are not large but statistically significant. The weGAN model outperforms deGAN in terms of supervised learning for 3 out of 4 data sets, and is thereby recommended. The synonyms from weGAN also tend to be more relevant than the original word2vec model. The t-SNE plots show that our generated document embeddings are similarly distributed as the original ones."
],
[
"M. Arjovsky, S. Chintala, and L. Bottou. (2017). Wasserstein GAN. arXiv:1701.07875.",
"D. Blei, A. Ng, and M. Jordan. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research. 3:993-1022.",
"R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. (2011). Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research. 12:2493-2537.",
"I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. (2014). Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27 (NIPS 2014).",
"J. Glover. (2016). Modeling documents with Generative Adversarial Networks. In Workshop on Adversarial Training (NIPS 2016).",
"S. Hochreiter and J. Schmidhuber. (1997). Long Short-term Memory. In Neural Computation, 9:1735-1780.",
"Y. Kim. Convolutional Neural Networks for Sentence Classification. (2014). In The 2014 Conference on Empirical Methods on Natural Language Processing (EMNLP 2014).",
"Q. Le and T. Mikolov. (2014). Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014).",
"J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. (2017). Adversarial Learning for Neural Dialogue Generation. arXiv:1701.06547.",
"M.-Y. Liu, and O. Tuzel. (2016). Coupled Generative Adversarial Networks. In Advances in Neural Information Processing Systems 29 (NIPS 2016).",
"X. Mao, Q. Li, H. Xie, R. Lau, Z. Wang, and S. Smolley. (2017). Least Squares Generative Adversarial Networks. arXiv:1611.04076.",
"T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. (2013). Distributed Embeddings of Words and Phrases and Their Compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013).",
"T. Mikolov, K. Chen, G. Corrado, and J. Dean. (2013b). Efficient Estimation of Word Representations in Vector Space. In Workshop (ICLR 2013).",
"M. Mirza, S. Osindero. (2014). Conditional Generative Adversarial Nets. arXiv:1411.1784.",
"A. Odena. (2016). Semi-supervised Learning with Generative Adversarial Networks. arXiv:1606. 01583.",
"J. Pennington, R. Socher, and C. Manning. Glove: Global vectors for word representation. (2014). In Empirical Methods in Natural Language Processing (EMNLP 2014).",
"O. Press, A. Bar, B. Bogin, J. Berant, and L. Wolf. (2017). Language Generation with Recurrent Generative Adversarial Networks without Pre-training. In 1st Workshop on Subword and Character level models in NLP (EMNLP 2017).",
"S. Rajeswar, S. Subramanian, F. Dutil, C. Pal, and A. Courville. (2017). Adversarial Generation of Natural Language. arXiv:1705.10929.",
"W. Rand. (1971). Objective Criteria for the Evaluation of Clustering Methods. Journal of the American Statistical Association, 66:846-850.",
"T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen. (2016). Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems 29 (NIPS 2016).",
"R. Socher, A. Perelygin, Alex, J. Wu, J. Chuang, C. Manning, A. Ng, and C. Potts. (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2013).",
"J. Springenberg. (2016). Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks. In 4th International Conference on Learning embeddings (ICLR 2016).",
"L. van der Maaten, and G. Hinton. (2008). Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.",
"B. Wang, K. Liu, and J. Zhao. (2016). Conditional Generative Adversarial Networks for Commonsense Machine Comprehension. In Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17).",
"Y. Zhang, Z. Gan, and L. Carin. (2016). Generating Text via Adversarial Training. In Workshop on Adversarial Training (NIPS 2016).",
"J. Zhao, M. Mathieu, and Y. LeCun. (2017). Energy-based Generative Adversarial Networks. In 5th International Conference on Learning embeddings (ICLR 2017)."
]
]
} | {
"question": [
"Which GAN do they use?",
"Do they evaluate grammaticality of generated text?",
"Which corpora do they use?"
],
"question_id": [
"6548db45fc28e8a8b51f114635bad14a13eaec5b",
"4c4f76837d1329835df88b0921f4fe8bda26606f",
"819d2e97f54afcc7cdb3d894a072bcadfba9b747"
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We assume that for each corpora INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . Note that INLINEFORM7 are given but INLINEFORM8 is trained. Here we consider INLINEFORM9 as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 and the same documents represented by the new embeddings INLINEFORM11 ."
],
"highlighted_evidence": [
"We assume that for each corpora INLINEFORM0 , we are given word embeddings for each word INLINEFORM1 , where INLINEFORM2 is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings INLINEFORM4 , INLINEFORM5 , into a single set of word embeddings INLINEFORM6 . Note that INLINEFORM7 are given but INLINEFORM8 is trained. Here we consider INLINEFORM9 as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 and the same documents represented by the new embeddings INLINEFORM11 ."
]
},
{
"unanswerable": false,
"extractive_spans": [
"weGAN",
"deGAN"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Suppose we have a number of different corpora INLINEFORM0 , which for example can be based on different categories or sentiments of text documents. We suppose that INLINEFORM1 , INLINEFORM2 , where each INLINEFORM3 represents a document. The words in all corpora are collected in a dictionary, and indexed from 1 to INLINEFORM4 . We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”"
],
"highlighted_evidence": [
"We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”"
]
}
],
"annotation_id": [
"544fe9ca42dab45cdbc085a5ba35c5f5a543ac46",
"ca3d3530b5cba547a09ca34495db2fd0e2fb5a3e"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We hypothesize that because weGAN takes into account document labels in a semi-supervised way, the embeddings trained from weGAN can better incorporate the labeling information and therefore, produce document embeddings which are better separated. The results are shown in Table 1 and averaged over 5 randomized runs. Performing the Welch's t-test, both changes after weGAN training are statistically significant at a INLINEFORM0 significance level. Because the Rand index captures matching accuracy, we observe from the Table 1 that weGAN tends to improve both metrics."
],
"highlighted_evidence": [
"Performing the Welch's t-test, both changes after weGAN training are statistically significant at a INLINEFORM0 significance level. "
]
}
],
"annotation_id": [
"1eb616b54e8fc48a4d3377a611d6e227755c5035"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"CNN, TIME, 20 Newsgroups, and Reuters-21578"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpora to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set."
],
"highlighted_evidence": [
"In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups, and Reuters-21578. The code and the two new data sets are available at github.com/baiyangwang/emgan. For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpora to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent INLINEFORM0 words across all corpora in a data set."
]
}
],
"annotation_id": [
"c8550faf7a4f5b35e2b85a8b98bf0de7ec3d8473"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [
"Figure 1: Model structure of weGAN.",
"Figure 2: Model structure of deGAN.",
"Table 1: A comparison between word2vec and weGAN in terms of the Rand index and the classification accuracy for the CNN data set.",
"Table 3: A comparison between word2vec and deGAN in terms of the accuracy for the CNN data set.",
"Figure 3: 2-d representation of original (red) and artificial (blue) examples in the CNN data set.",
"Table 4: Bag-of-words representations of original and artificial text in the CNN data set.",
"Table 5: A comparison between word2vec, weGAN, and deGAN in terms of the Rand index and the classification accuracy for the TIME data set.",
"Table 6: Synonyms of “Obama,” “Trump,” and “U.S.” before and after weGAN training for the TIME data set.",
"Table 7: Bag-of-words representations of original and artificial text in the TIME data set.",
"Figure 4: 2-d representation of original (red) and artificial (blue) examples in the TIME data set.",
"Table 8: A comparison between word2vec, weGAN, and deGAN in terms of the Rand index and the classification accuracy for the 20 Newsgroups data set.",
"Table 9: A comparison between word2vec, weGAN, and deGAN in terms of the Rand index and the classification accuracy for the Reuters-21578 data set."
],
"file": [
"6-Figure1-1.png",
"8-Figure2-1.png",
"10-Table1-1.png",
"11-Table3-1.png",
"11-Figure3-1.png",
"11-Table4-1.png",
"12-Table5-1.png",
"12-Table6-1.png",
"13-Table7-1.png",
"13-Figure4-1.png",
"14-Table8-1.png",
"14-Table9-1.png"
]
} |
2001.00137 | Stacked DeBERT: All Attention in Incomplete Data for Text Classification | In this paper, we propose Stacked DeBERT, short for Stacked Denoising Bidirectional Encoder Representations from Transformers. This novel model improves robustness in incomplete data, when compared to existing systems, by designing a novel encoding scheme in BERT, a powerful language representation model solely based on attention mechanisms. Incomplete data in natural language processing refer to text with missing or incorrect words, and its presence can hinder the performance of current models that were not implemented to withstand such noises, but must still perform well even under duress. This is due to the fact that current approaches are built for and trained with clean and complete data, and thus are not able to extract features that can adequately represent incomplete data. Our proposed approach consists of obtaining intermediate input representations by applying an embedding layer to the input tokens followed by vanilla transformers. These intermediate features are given as input to novel denoising transformers which are responsible for obtaining richer input representations. The proposed approach takes advantage of stacks of multilayer perceptrons for the reconstruction of missing words' embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. We consider two datasets for training and evaluation: the Chatbot Natural Language Understanding Evaluation Corpus and Kaggle's Twitter Sentiment Corpus. Our model shows improved F1-scores and better robustness in informal/incorrect texts present in tweets and in texts with Speech-to-Text error in the sentiment and intent classification tasks. | {
"section_name": [
"Introduction",
"Proposed model",
"Dataset ::: Twitter Sentiment Classification",
"Dataset ::: Intent Classification from Text with STT Error",
"Experiments ::: Baseline models",
"Experiments ::: Baseline models ::: NLU service platforms",
"Experiments ::: Baseline models ::: Semantic hashing with classifier",
"Experiments ::: Training specifications",
"Experiments ::: Training specifications ::: NLU service platforms",
"Experiments ::: Training specifications ::: Semantic hashing with classifier",
"Experiments ::: Training specifications ::: BERT",
"Experiments ::: Training specifications ::: Stacked DeBERT",
"Experiments ::: Results on Sentiment Classification from Incorrect Text",
"Experiments ::: Results on Intent Classification from Text with STT Error",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of internet and social networks, which allowed language and modern communication to be been rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3.",
"Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language.",
"The advancement of deep neural networks have immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification, have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need of research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regards for sentence correctness.",
"Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data.",
"The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold:",
"Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text.",
"Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related with these tasks.",
"The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusion and future works."
],
[
"We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheming for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT.",
"The initial part of the model is the conventional BERT, a multi-layer bidirectional Transformer encoder and a powerful language model. During training, BERT is fine-tuned on the incomplete text classification corpus (see Section SECREF3). The first layer pre-processes the input sentence by making it lower-case and by tokenizing it. It also prefixes the sequence of tokens with a special character `[CLS]' and sufixes each sentence with a `[SEP]' character. It is followed by an embedding layer used for input representation, with the final input embedding being a sum of token embedddings, segmentation embeddings and position embeddings. The first one, token embedding layer, uses a vocabulary dictionary to convert each token into a more representative embedding. The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0. In our case, since our data are formed of single sentences, the segment is 1 until the first `[SEP]' character appears (indicating segment A) and then it becomes 0 (segment B). The position embedding layer, as the name indicates, adds information related to the token's position in the sentence. This prepares the data to be considered by the layers of vanilla bidirectional transformers, which outputs a hidden embedding that can be used by our novel layers of denoising transformers.",
"Although BERT has shown to perform better than other baseline models when handling incomplete data, it is still not enough to completely and efficiently handle such data. Because of that, there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words. With this purpose in mind, we implement a novel encoding scheme consisting of denoising transformers, which is composed of stacks of multilayer perceptrons for the reconstruction of missing words’ embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data $h_{inc}$ as input and embeddings corresponding to its complete version $h_{comp}$ as target. Both input and target are obtained after applying the embedding layers and the vanilla transformers, as indicated in Fig. FIGREF4, and have shape $(N_{bs}, 768, 128)$, where $N_{bs}$ is the batch size, 768 is the original BERT embedding size for a single token, and 128 is the maximum sequence length in a sentence.",
"The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each. The first set is responsible for compressing the $h_{inc}$ into a latent-space representation, extracting more abstract features into lower dimension vectors $z_1$, $z_2$ and $\\mathbf {z}$ with shape $(N_{bs}, 128, 128)$, $(N_{bs}, 32, 128)$, and $(N_{bs}, 12, 128)$, respectively. This process is shown in Eq. (DISPLAY_FORM5):",
"where $f(\\cdot )$ is the parameterized function mapping $h_{inc}$ to the hidden state $\\mathbf {z}$. The second set then respectively reconstructs $z_1$, $z_2$ and $\\mathbf {z}$ into $h_{rec_1}$, $h_{rec_2}$ and $h_{rec}$. This process is shown in Eq. (DISPLAY_FORM6):",
"where $g(\\cdot )$ is the parameterized function that reconstructs $\\mathbf {z}$ as $h_{rec}$.",
"The reconstructed hidden sentence embedding $h_{rec}$ is compared with the complete hidden sentence embedding $h_{comp}$ through a mean square error loss function, as shown in Eq. (DISPLAY_FORM7):",
"After reconstructing the correct hidden embeddings from the incomplete sentences, the correct hidden embeddings are given to bidirectional transformers to generate input representations. The model is then fine-tuned in an end-to-end manner on the incomplete text classification corpus.",
"Classification is done with a feedforward network and softmax activation function. Softmax $\\sigma $ is a discrete probability distribution function for $N_C$ classes, with the sum of the classes probability being 1 and the maximum value being the predicted class. The predicted class can be mathematically calculated as in Eq. (DISPLAY_FORM8):",
"where $o = W t + b$, the output of the feedforward layer used for classification."
],
[
"In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.",
"Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.",
"After obtaining the correct sentences, our two-class dataset has class distribution as shown in Table TABREF14. There are 200 sentences used in the training stage, with 100 belonging to the positive sentiment class and 100 to the negative class, and 50 samples being used in the evaluation stage, with 25 negative and 25 positive. This totals in 300 samples, with incorrect and correct sentences combined. Since our goal is to evaluate the model's performance and robustness in the presence of noise, we only consider incorrect data in the testing phase. Note that BERT is a pre-trained model, meaning that small amounts of data are enough for appropriate fine-tuning."
],
[
"In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness.",
"The dataset used to evaluate the models' performance is the Chatbot Natural Language Unerstanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names.",
"The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations.",
"Table TABREF24 exemplifies a complete and its respective incomplete sentences with different TTS-STT combinations, thus varying rates of missing and incorrect words. The level of noise in the STT imbued sentences is denoted by a inverted BLEU (iBLEU) score ranging from 0 to 1. The inverted BLEU score is denoted in Eq. (DISPLAY_FORM23):",
"where BLEU is a common metric usually used in machine translation tasks BIBREF21. We decide to showcase that instead of regular BLEU because it is more indicative to the amount of noise in the incomplete text, where the higher the iBLEU, the higher the noise."
],
[
"Besides the already mentioned BERT, the following baseline models are also used for comparison."
],
[
"We focus on the three following services, where the first two are commercial services and last one is open source with two separate backends: Google Dialogflow (formerly Api.ai) , SAP Conversational AI (formerly Recast.ai) and Rasa (spacy and tensorflow backend) ."
],
[
"Shridhar et al. BIBREF12 proposed a word embedding method that doesn't suffer from out-of-vocabulary issues. The authors achieve this by using hash tokens in the alphabet instead of a single word, making it vocabulary independent. For classification, classifiers such as Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Random Forest are used. A complete list of classifiers and training specifications are given in Section SECREF31."
],
[
"The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task: complete data and one for each of the TTS-STT combinations (gtts-witai and macsay-witai). Regarding the sentiment classification from incorrect sentences task, the baseline and proposed models are each trained 3 times: original text, corrected text and incorrect with correct texts. The reported F1 scores are the best accuracies obtained from 10 runs."
],
[
"No settable training configurations available in the online platforms."
],
[
"Trained on 3-gram, feature vector size of 768 as to match the BERT embedding size, and 13 classifiers with parameters set as specified in the authors' paper so as to allow comparison: MLP with 3 hidden layers of sizes $[300, 100, 50]$ respectively; Random Forest with 50 estimators or trees; 5-fold Grid Search with Random Forest classifier and estimator $([50, 60, 70]$; Linear Support Vector Classifier with L1 and L2 penalty and tolerance of $10^{-3}$; Regularized linear classifier with Stochastic Gradient Descent (SGD) learning with regularization term $alpha=10^{-4}$ and L1, L2 and Elastic-Net penalty; Nearest Centroid with Euclidian metric, where classification is done by representing each class with a centroid; Bernoulli Naive Bayes with smoothing parameter $alpha=10^{-2}$; K-means clustering with 2 clusters and L2 penalty; and Logistic Regression classifier with L2 penalty, tolerance of $10^{-4}$ and regularization term of $1.0$. Most often, the best performing classifier was MLP."
],
[
"Conventional BERT is a BERT-base-uncased model, meaning that it has 12 transformer blocks $L$, hidden size $H$ of 768, and 12 self-attention heads $A$. The model is fine-tuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer, learning rate of $2*10^{-5}$, maximum sequence length of 128, and warm up proportion of $0.1$. The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus."
],
[
"Our proposed model is trained in end-to-end manner on 2 Titan X GPUs, with training time depending on the size of the dataset and train batch size. The stack of multilayer perceptrons are trained for 100 and 1,000 epochs with Adam Optimizer, learning rate of $10^{-3}$, weight decay of $10^{-5}$, MSE loss criterion and batch size the same as BERT (4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus)."
],
[
"Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$ against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\\%$ accuracy against BERT's 76$\\%$, an improvement of 6$\\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\\%$ for our model and 74$\\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.",
"In addition to the overall F1-score, we also present a confusion matrix, in Fig. FIGREF38, with the per-class F1-scores for BERT and Stacked DeBERT. The normalized confusion matrix plots the predicted labels versus the target/target labels. Similarly to Table TABREF37, we evaluate our model with the original Twitter dataset, the corrected version and both original and corrected tweets. It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes. In the Inc dataset, the true class 1 in BERT performs with approximately 50%. However, Stacked DeBERT is able to improve that to 72%, although to a cost of a small decrease in performance of class 0. A similar situation happens in the remaining two datasets, with improved accuracy in class 0 from 64% to 84% and 60% to 76% respectively."
],
[
"Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.",
"The table also indicates the level of noise in each dataset with the already mentioned iBLEU score, where 0 means no noise and higher values mean higher quantity of noise. As expected, the models' accuracy degrade with the increase in noise, thus F1-scores of gtts-witai are higher than macsay-witai. However, while the other models decay rapidly in the presence of noise, our model does not only outperform them but does so with a wider margin. This is shown with the increasing robustness curve in Fig. FIGREF41 and can be demonstrated by macsay-witai outperforming the baseline models by twice the gap achieved by gtts-witai.",
"Further analysis of the results in Table TABREF40 show that, BERT decay is almost constant with the addition of noise, with the difference between the complete data and gtts-witai being 1.88 and gtts-witai and macsay-witai being 1.89. Whereas in Stacked DeBERT, that difference is 1.89 and 0.94 respectively. This is stronger indication of our model's robustness in the presence of noise.",
"Additionally, we also present Fig. FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error. Analogously to the Twitter Sentiment Classification task, the per-class F1-scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the high-achieving accuracy of the second one."
],
[
"In this work, we proposed a novel deep neural network, robust to noisy text in the form of sentences with missing and/or incorrect words, called Stacked DeBERT. The idea was to improve the accuracy performance by improving the representation ability of the model with the implementation of novel denoising transformers. More specifically, our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings. Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods, namely BERT and Semantic Hashing with neural classifier. Our model showed better performance when evaluated on F1 scores in both Twitter sentiment and intent text with STT error classification tasks. The per-class F1 score was also evaluated in the form of normalized confusion matrices, showing that our model was able to improve the overall performance by better balancing the accuracy of each class, trading-off small decreases in high achieving class for significant improvements in lower performing ones. In the Chatbot dataset, accuracy improvement was achieved even without trade-off, with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement. Further evaluation on the F1-scores decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data, be that in the form of incorrect sentences or sentences with STT error. Not only that, experiments on the Twitter dataset also showed improved accuracy in clean data, with complete sentences. We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence. For future works, we plan on evaluating the robustness of our model against other types of noise, such as word reordering, word insertion, and spelling mistakes in sentences. In order to improve the performance of our model, further experiments will be done in search for more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer."
],
[
"This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00564, Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding) and Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Korea government (MOTIE) (50%) and the Technology Innovation Program: Industrial Strategic Technology Development Program (No: 10073162) funded By the Ministry of Trade, Industry & Energy (MOTIE, Korea) (50%)."
]
]
} | {
"question": [
"Do they report results only on English datasets?",
"How do the authors define or exemplify 'incorrect words'?",
"How many vanilla transformers do they use after applying an embedding layer?",
"Do they test their approach on a dataset without incomplete data?",
"Should their approach be applied only when dealing with incomplete data?",
"By how much do they outperform other models in the sentiment in intent classification tasks?"
],
"question_id": [
"637aa32a34b20b4b0f1b5dfa08ef4e0e5ed33d52",
"4b8257cdd9a60087fa901da1f4250e7d910896df",
"7e161d9facd100544fa339b06f656eb2fc64ed28",
"abc5836c54fc2ac8465aee5a83b9c0f86c6fd6f5",
"4debd7926941f1a02266b1a7be2df8ba6e79311a",
"3b745f086fb5849e7ce7ce2c02ccbde7cfdedda5"
],
"nlp_background": [
"five",
"five",
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"twitter",
"twitter",
"",
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.",
"The dataset used to evaluate the models' performance is the Chatbot Natural Language Unerstanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names."
],
"highlighted_evidence": [
"Even though this corpus has incorrect sentences and their emotional labels, they lack their respective corrected sentences, necessary for the training of our model. In order to obtain this missing information, we outsource native English speakers from an unbiased and anonymous platform, called Amazon Mechanical Turk (MTurk) BIBREF19, which is a paid marketplace for Human Intelligence Tasks (HITs). We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences. Some examples are shown in Table TABREF12.",
"The dataset used to evaluate the models' performance is the Chatbot Natural Language Unerstanding (NLU) Evaluation Corpus, introduced by Braun et al. BIBREF20 to test NLU services. It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections. The dataset has two intents, namely Departure Time and Find Connection with 100 train and 106 test samples, shown in Table TABREF18. Even though English is the main language of the benchmark, this dataset contains a few German station and street names."
]
}
],
"annotation_id": [
"c7a83f3225e54b6306ef3372507539e471c155d0"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "typos in spellings or ungrammatical words",
"evidence": [
"Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of internet and social networks, which allowed language and modern communication to be been rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3."
],
"highlighted_evidence": [
"Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. "
]
}
],
"annotation_id": [
"7c44e07bb8f2884cd73dd023e86dfeb7241e999c"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"6d5e9774c1d04b3cac91fcc7ac9fd6ff56d9bc63"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora. The incomplete sentences with STT error are obtained in a 2-step process shown in Fig. FIGREF22. The first step is to apply a TTS module to the available complete sentence. Here, we apply gtts , a Google Text-to-Speech python library, and macsay , a terminal command available in Mac OS as say. The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors. The STT module used here was witai , freely available and maintained by Wit.ai. The mentioned TTS and STT modules were chosen according to code availability and whether it's freely available or has high daily usage limitations."
],
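As a rough illustration of the two-step corruption pipeline quoted above (text-to-speech with gtts, then speech-to-text), a minimal Python sketch follows. The `wit_transcribe` helper is a hypothetical placeholder for the Wit.ai STT call; only the gTTS usage is a real API.

```python
# Sketch of the 2-step TTS -> STT corruption of a complete sentence.
# Assumption: `wit_transcribe` stands in for any STT client (e.g. Wit.ai);
# its implementation is not shown and must be supplied by the reader.
from gtts import gTTS

def wit_transcribe(audio_path: str) -> str:
    # Placeholder: send the audio file to an STT service, return the text.
    return ""

def corrupt_sentence(sentence: str, out_path: str = "tmp.mp3") -> str:
    gTTS(text=sentence, lang="en").save(out_path)  # step 1: text-to-speech
    return wit_transcribe(out_path)                # step 2: speech-to-text

noisy = corrupt_sentence("when is the next train to the main station")
```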
"highlighted_evidence": [
"The incomplete dataset used for training is composed of lower-cased incomplete data obtained by manipulating the original corpora."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"In order to evaluate the performance of our model, we need access to a naturally noisy dataset with real human errors. Poor quality texts obtained from Twitter, called tweets, are then ideal for our task. For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.",
"In the intent classification task, we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus. In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words. Analysis on this dataset adds value to our work by enabling evaluation of our model's robustness to different rates of data incompleteness."
],
"highlighted_evidence": [
"For this reason, we choose Kaggle's two-class Sentiment140 dataset BIBREF18, which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness. Thus, it has many mistakes, as specified in Table TABREF11.",
"In the intent classification corpus, we have the complete sentences and intent labels but lack their corresponding incomplete sentences, and since our task revolves around text classification in incomplete or incorrect data, it is essential that we obtain this information. To remedy this issue, we apply a Text-to-Speech (TTS) module followed by a Speech-to-Text (STT) module to the complete sentences in order to obtain incomplete sentences with STT error. Due to TTS and STT modules available being imperfect, the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words."
]
}
],
"annotation_id": [
"b36dcc41db3d7aa7503fe85cbc1793b27473e4ed",
"f0dd380c67caba4c7c3fe0ee9b8185f4923ed868"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$ against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\\%$ accuracy against BERT's 76$\\%$, an improvement of 6$\\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\\%$ for our model and 74$\\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences."
],
"highlighted_evidence": [
"We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$ against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. "
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We propose Stacked Denoising BERT (DeBERT) as a novel encoding scheming for the task of incomplete intent classification and sentiment classification from incorrect sentences, such as tweets and text with STT error. The proposed model, illustrated in Fig. FIGREF4, is structured as a stacking of embedding layers and vanilla transformer layers, similarly to the conventional BERT BIBREF10, followed by layers of novel denoising transformers. The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. By reconstructing these hidden embeddings, we are able to improve the encoding scheme in BERT."
],
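A schematic PyTorch sketch of the layer stacking described above is given below. The hidden size, head count, bottleneck shape, and the use of `nn.TransformerEncoderLayer` in place of pretrained BERT weights are assumptions made for illustration, not the authors' exact architecture.

```python
# Illustrative Stacked DeBERT-style layering (not the authors' code).
# Assumptions: hidden size 768, 12 heads; the denoising block is a small
# bottleneck MLP whose output would be trained (e.g. with an MSE loss)
# to match hidden embeddings of the corresponding complete sentence.
import torch
import torch.nn as nn

class StackedDeBERTSketch(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, heads=12,
                 n_vanilla=12, n_denoise=2, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.vanilla = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                       batch_first=True),
            num_layers=n_vanilla)
        self.denoise = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU(),
                                     nn.Linear(128, hidden))
        self.denoise_tf = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                       batch_first=True),
            num_layers=n_denoise)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):
        h = self.vanilla(self.embed(token_ids))   # encode incomplete sentence
        h = self.denoise_tf(self.denoise(h))      # reconstruct hidden embeddings
        return self.classifier(h[:, 0])           # classify from first token
```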
"highlighted_evidence": [
"The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words. "
]
}
],
"annotation_id": [
"2b4f582794c836ce6cde20b07b5f754cb67f8e20",
"c6bacbe8041fdef389e98b119b050cb03cce14e1"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "In the sentiment classification task by 6% to 8% and in the intent classification task by 0.94% on average",
"evidence": [
"Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. We evaluate our model and baseline models on three versions of the dataset. The first one (Inc) only considers the original data, containing naturally incorrect tweets, and achieves accuracy of 80$\\%$ against BERT's 72$\\%$. The second version (Corr) considers the corrected tweets, and shows higher accuracy given that it is less noisy. In that version, Stacked DeBERT achieves 82$\\%$ accuracy against BERT's 76$\\%$, an improvement of 6$\\%$. In the last case (Inc+Corr), we consider both incorrect and correct tweets as input to the models in hopes of improving performance. However, the accuracy was similar to the first aforementioned version, 80$\\%$ for our model and 74$\\%$ for the second highest performing model. Since the first and last corpus gave similar performances with our model, we conclude that the Twitter dataset does not require complete sentences to be given as training input, in addition to the original naturally incorrect tweets, in order to better model the noisy sentences.",
"Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.",
"FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5))."
],
"highlighted_evidence": [
"Experimental results for the Twitter Sentiment Classification task on Kaggle's Sentiment140 Corpus dataset, displayed in Table TABREF37, show that our model has better F1-micros scores, outperforming the baseline models by 6$\\%$ to 8$\\%$. ",
"Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40. When presented with data containing STT error, our model outperforms all baseline models in both combinations of TTS-STT: gtts-witai outperforms the second placing baseline model by 0.94% with F1-score of 97.17%, and macsay-witai outperforms the next highest achieving model by 1.89% with F1-score of 96.23%.",
"FLOAT SELECTED: Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5))."
]
}
],
"annotation_id": [
"1f4a6fce4f78662774735b1e27744f55b0efd7a8"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
]
} | {
"caption": [
"Figure 1: The proposed model Stacked DeBERT is organized in three layers: embedding, conventional bidirectional transformers and denoising bidirectional transformer.",
"Table 1: Types of mistakes on the Twitter dataset.",
"Table 2: Examples of original tweets and their corrected version.",
"Table 3: Details about our Twitter Sentiment Classification dataset, composed of incorrect and correct data.",
"Table 4: Details about our Incomplete Intent Classification dataset based on the Chatbot NLU Evaluation Corpus.",
"Figure 2: Diagram of 2-step process to obtain dataset with STT error in text.",
"Table 5: Example of sentence from Chatbot NLU Corpus with different TTS-STT combinations and their respective inverted BLEU score (denotes the level of noise in the text).",
"Table 6: F1-micro scores for Twitter Sentiment Classification task on Kaggle’s Sentiment140 Corpus. Note that: (Inc) is the original dataset, with naturally incorrect tweets, (Corr) is the corrected version of the dataset and (Inc+Corr) contains both.",
"Figure 3: Normalized confusion matrix for the Twitter Sentiment Classification dataset. The first row has the confusion matrices for BERT in the original Twitter dataset (Inc), the corrected version (Corr) and both original and corrected tweets (Inc+Corr) respectively. The second row contains the confusion matrices for Stacked DeBERT in the same order.",
"Table 7: F1-micro scores for original sentences and sentences imbued with STT error in the Chatbot Corpus. The noise level is represented by the iBLEU score (See Eq. (5)).",
"Figure 4: Robustness curve for the Chatbot NLU Corpus with STT error.",
"Figure 5: Normalized confusion matrix for the Chatbot NLU Intent Classification dataset for complete data and data with STT error. The first column has the confusion matrices for BERT and the second for Stacked DeBERT."
],
"file": [
"5-Figure1-1.png",
"8-Table1-1.png",
"9-Table2-1.png",
"9-Table3-1.png",
"10-Table4-1.png",
"11-Figure2-1.png",
"12-Table5-1.png",
"14-Table6-1.png",
"15-Figure3-1.png",
"16-Table7-1.png",
"17-Figure4-1.png",
"17-Figure5-1.png"
]
} |
1910.03042 | Gunrock: A Social Bot for Complex and Engaging Long Conversations | Gunrock is the winner of the 2018 Amazon Alexa Prize, as evaluated by coherence and engagement from both real users and Amazon-selected expert conversationalists. We focus on understanding complex sentences and having in-depth conversations in open domains. In this paper, we introduce some innovative system designs and related validation analysis. Overall, we found that users produce longer sentences to Gunrock, which are directly related to users' engagement (e.g., ratings, number of turns). Additionally, users' backstory queries about Gunrock are positively correlated to user satisfaction. Finally, we found dialog flows that interleave facts and personal opinions and stories lead to better user satisfaction. | {
"section_name": [
"Introduction",
"System Architecture",
"System Architecture ::: Automatic Speech Recognition",
"System Architecture ::: Natural Language Understanding",
"System Architecture ::: Dialog Manager",
"System Architecture ::: Knowledge Databases",
"System Architecture ::: Natural Language Generation",
"System Architecture ::: Text To Speech",
"Analysis",
"Analysis ::: Response Depth: Mean Word Count",
"Analysis ::: Gunrock's Backstory and Persona",
"Analysis ::: Interleaving Personal and Factual Information: Animal Module",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example)."
],
[
"Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1."
],
[
"Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to generate ASR errors without contextual information, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g. a list of the most popular movies names) based on their phonetic information. We extract the primary and secondary code using The Double Metaphone Search Algorithm BIBREF8 for noun phrases (extracted by noun trunks) and the selected knowledge base, and suggest a potential fix by code matching. An example can be seen in User_3 and Gunrock_3 in Table TABREF2."
],
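A minimal sketch of this phonetic-matching idea is shown below, assuming the third-party `metaphone` package for Double Metaphone codes; Gunrock's actual candidate scoring is more involved than this exact-code lookup.

```python
# Illustrative phonetic correction of ASR noun phrases against a knowledge
# base of popular titles. Assumption: `pip install Metaphone` provides
# doublemetaphone(); exact-code matching is a simplification.
from metaphone import doublemetaphone

KNOWLEDGE_BASE = ["a star is born", "bradley cooper", "lady gaga"]

def phrase_code(phrase: str) -> str:
    return " ".join(doublemetaphone(word)[0] for word in phrase.split())

CODES = {title: phrase_code(title) for title in KNOWLEDGE_BASE}

def suggest_fix(asr_phrase: str) -> str:
    code = phrase_code(asr_phrase)
    for title, kb_code in CODES.items():
        if kb_code == code:          # phonetically identical -> likely ASR slip
            return title
    return asr_phrase                # no confident fix; keep the ASR output

print(suggest_fix("a star is borne"))  # ideally maps back to "a star is born"
```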
[
"Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings. We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g.,“i like the movie a star is born\"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him\" in the last segment in User_5 is replaced with “bradley cooper\" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.",
"In order to extract the intent for each segment, we designed MIDAS, a human-machine dialog act scheme with 23 tags and implemented a multi-label dialog act classification model using contextual information BIBREF9. Next, the NLU components analyzed on each segment in a user utterance are sent to the DM and NLG module for state tracking and generation, respectively."
],
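To make the hand-off concrete, the kind of per-segment structure the NLU could pass to the DM and NLG is sketched below; the field names are illustrative assumptions, not Gunrock's actual internal schema.

```python
# Illustrative container for per-segment NLU output (field names assumed).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SegmentNLU:
    text: str                         # segment after ASR correction
    resolved_text: str                # after coreference resolution
    noun_phrases: List[str] = field(default_factory=list)
    topic: str = ""                   # e.g. "Movies"
    entities: Dict[str, str] = field(default_factory=dict)   # span -> type
    sentiment: str = "neutral"
    dialog_acts: List[str] = field(default_factory=list)     # MIDAS tags

segment = SegmentNLU(text="i like him",
                     resolved_text="i like bradley cooper",
                     noun_phrases=["bradley cooper"],
                     topic="Movies",
                     entities={"bradley cooper": "PERSON"},
                     sentiment="positive",
                     dialog_acts=["opinion"])
```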
[
"We implemented a hierarchical dialog manager, consisting of a high level and low level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter,\" triggers Sub-DM: Books. Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.",
"Low level dialog management is handled by the separate topic dialog modules, which use modular finite state transducers to execute various dialog segments processed by the NLU. Using topic-specific modules enables deeper conversations that maintain the context. We design dialog flows in each of the finite state machines, as well. Dialog flow is determined by rule-based transitions between a specified fixed set of dialog states. To ensure that our states and transitions are effective, we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts. Meanwhile, dialog flow is customized to each user by tracking user attributes as dialog context. In addition, each dialog flow is adaptive to user responses to show acknowledgement and understanding (e.g., talking about pet ownership in the animal module). Based on the user responses, many dialog flow variations exist to provide a fresh experience each time. This reduces the feeling of dialogs being scripted and repetitive. Our dialog flows additionally interleave facts, opinions, experiences, and questions to make the conversation flexible and interesting.",
"In the meantime, we consider feedback signals such as “continue\" and “stop\" from the current topic dialog module, indicating whether it is able to respond to the following request in the dialog flow, in order to select the best response module. Additionally, in all modules we allow mixed-initiative interactions; users can trigger a new dialog module when they want to switch topics while in any state. For example, users can start a new conversation about movies from any other topic module."
],
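A toy version of one finite-state topic module is sketched below; the states, transitions, and the "continue"/"stop" signalling are simplified assumptions intended only to show the mechanism.

```python
# Toy finite-state topic module (states and transitions are invented here).
class AnimalModuleSketch:
    def __init__(self):
        self.state = "ask_has_pet"

    def step(self, nlu_features: dict):
        """Return (signal_to_DM, response_key) for the current user turn."""
        if self.state == "ask_has_pet":
            if nlu_features.get("yes_no") == "yes":
                self.state = "ask_pet_name"
                return "continue", "acknowledge_pet_and_ask_name"
            self.state = "share_fact"
            return "continue", "give_animal_fact"
        if self.state == "ask_pet_name":
            self.state = "share_fact"
            return "continue", "comment_on_pet_name_and_ask_duration"
        return "stop", "propose_topic_switch"   # let the DM pick another module
```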
[
"All topic dialog modules query knowledge bases to provide information to the user. To respond to general factual questions, Gunrock queries the EVI factual database , as well as other up-to-date scraped information appropriate for the submodule, such as news and current showing movies in a specific location from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2). We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and is interested in science and technology."
],
[
"In order to avoid repetitive and non-specific responses commonly seen in dialog systems BIBREF10, Gunrock uses a template manager to select from a handcrafted response templates based on the dialog state. One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms. Among these response templates for the same dialog state, one is randomly selected without repetition to provide variety unless all have been exhausted. When a response template is selected, any slots are substituted with actual contents, including queried information for news and specific data for weather. For example, to ground a movie name due to ASR errors or multiple versions, one template is “Are you talking about {movie_title} released in {release_year} starring {actor_name} as {actor_role}?\". Module-specific templates were generated for each topic (e.g., animals), but some of the templates are generalizable across different modules (e.g., “What’s your favorite [movie $|$ book $|$ place to visit]?\")",
"In many cases, response templates corresponding to different dialog acts are dynamically composed to give the final response. For example, an appropriate acknowledgement for the user’s response can be combined with a predetermined follow-up question."
],
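The select-without-repetition and slot-filling behaviour can be sketched as follows; the template strings and state names are invented for illustration.

```python
# Illustrative template manager: pick an unused surface form for a dialog
# state, then fill its slots. Templates and state names are invented.
import random

TEMPLATES = {
    "ground_movie": [
        "Are you talking about {movie_title} released in {release_year}?",
        "Do you mean the {release_year} movie {movie_title}?",
    ],
}
_used = {}

def respond(state: str, **slots) -> str:
    pool = [t for t in TEMPLATES[state] if t not in _used.get(state, set())]
    if not pool:                       # all surface forms exhausted: reset
        _used[state] = set()
        pool = list(TEMPLATES[state])
    template = random.choice(pool)
    _used.setdefault(state, set()).add(template)
    return template.format(**slots)

print(respond("ground_movie", movie_title="A Star Is Born", release_year=2018))
```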
[
"After NLG, we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation. We use a rule-based system to systematically add interjections, specifically Alexa Speechcons, and fillers to approximate human-like cognitive-emotional expression BIBREF11. For more on the framework and analysis of the TTS modifications, see BIBREF12."
],
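As a rough illustration of the markup step, a rule-based decorator might wrap the NLG output in SSML along these lines; the interjection list and insertion rule are assumptions, since the paper does not spell out its rules (Alexa speechcons are expressed with the say-as interjection tag).

```python
# Illustrative SSML decoration of an NLG response. The interjection list and
# the insertion rule are assumptions; only the general markup is shown.
import random

INTERJECTIONS = ["wow", "awesome", "got it"]   # example list, not Gunrock's

def add_speechcon(response: str) -> str:
    word = random.choice(INTERJECTIONS)
    return ('<speak><say-as interpret-as="interjection">' + word + '!</say-as> '
            + response + '</speak>')

print(add_speechcon("I love talking about movies."))
```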
[
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)."
],
[
"Two unique features of Gunrock are its ability to dissect longer, complex sentences, and its methods to encourage users to be active conversationalists, elaborating on their responses. In prior work, even if users are able to drive the conversation, often bots use simple yes/no questions to control the conversational flow to improve understanding; as a result, users are more passive interlocutors in the conversation. We aimed to improve user engagement by designing the conversation to have more open-ended opinion/personal questions, and show that the system can understand the users' complex utterances (See nlu for details on NLU). Accordingly, we ask if users' speech behavior will reflect Gunrock's technical capability and conversational strategy, producing longer sentences.",
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.",
"Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts."
],
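These engagement regressions follow the standard ordinary-least-squares pattern; a sketch with statsmodels is below. The column names and toy values are assumptions, and the same pattern on log-transformed variables applies to the backstory analysis reported later.

```python
# Sketch of the engagement regressions (column names and values are toy data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({                 # one row per conversation
    "rating":     [4, 5, 3, 2, 5, 4],
    "mean_words": [7.1, 9.3, 5.0, 4.2, 10.5, 6.8],
    "n_turns":    [15, 30, 10, 6, 41, 18],
})

# Separate linear regressions: rating ~ word count, turns ~ word count.
print(smf.ols("rating ~ mean_words", data=df).fit().summary())
print(smf.ols("n_turns ~ mean_words", data=df).fit().summary())
# The backstory analysis uses the same pattern on log-transformed variables.
```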
[
"We assessed the user's interest in Gunrock by tagging instances where the user triggered Gunrock's backstory (e.g., “What's your favorite color?\"). For users with at least one backstory question, we modeled overall (log) Rating with a linear regression by the (log) `Number of Backstory Questions Asked' (log transformed due to the variables' nonlinear relationship). We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses. Overall, the number of times users queried Gunrock's backstory was strongly related to the rating they gave at the end of the interaction (log:$\\beta $=0.10, SE=0.002, t=58.4, p$<$0.001)(see Figure 3). This suggests that maintaining a consistent personality — and having enough responses to questions the users are interested in — may improve user satisfaction."
],
[
"Gunrock includes a specific topic module on animals, which includes a factual component where the system provides animal facts, as well as a more personalized component about pets. Our system is designed to engage users about animals in a more casual conversational style BIBREF14, eliciting follow-up questions if the user indicates they have a pet; if we are able to extract the pet's name, we refer to it in the conversation (e.g., “Oliver is a great name for a cat!\", “How long have you had Oliver?\"). In cases where the user does not indicate that they have a pet, the system solely provides animal facts. Therefore, the animal module can serve as a test of our interleaving strategy: we hypothesized that combining facts and personal questions — in this case about the user's pet — would lead to greater user satisfaction overall.",
"We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as “Yes\", “No\", or “NA\" (if users did not respond with an affirmative or negative response). We modeled user rating with a linear regression model, with predictor of “Has Pet' (2 levels: Yes, No). We found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation ($\\beta $=0.15, SE=0.06, t=2.53, p$=$0.016) (see Figure 4). One interpretation is that interleaving factual information with more in-depth questions about their pet result in improved user experience. Yet, another interpretation is that pet owners may be more friendly and amenable to a socialbot; for example, prior research has linked differences in personality to pet ownership BIBREF15."
],
[
"Gunrock is a social chatbot that focuses on having long and engaging speech-based conversations with thousands of real users. Accordingly, our architecture employs specific modules to handle longer and complex utterances and encourages users to be more active in a conversation. Analysis shows that users' speech behavior reflects these capabilities. Longer sentences and more questions about Gunrocks's backstory positively correlate with user experience. Additionally, we find evidence for interleaved dialog flow, where combining factual information with personal opinions and stories improve user satisfaction. Overall, this work has practical applications, in applying these design principles to other social chatbots, as well as theoretical implications, in terms of the nature of human-computer interaction (cf. 'Computers are Social Actors' BIBREF16). Our results suggest that users are engaging with Gunrock in similar ways to other humans: in chitchat about general topics (e.g., animals, movies, etc.), taking interest in Gunrock's backstory and persona, and even producing more information about themselves in return."
],
[
"We would like to acknowledge the help from Amazon in terms of financial and technical support."
]
]
} | {
"question": [
"What is the sample size of people used to measure user satisfaction?",
"What are all the metrics to measure user engagement?",
"What the system designs introduced?",
"Do they specify the model they use for Gunrock?",
"Do they gather explicit user satisfaction data on Gunrock?",
"How do they correlate user backstory queries to user satisfaction?"
],
"question_id": [
"44c7c1fbac80eaea736622913d65fe6453d72828",
"3e0c9469821cb01a75e1818f2acb668d071fcf40",
"a725246bac4625e6fe99ea236a96ccb21b5f30c6",
"516626825e51ca1e8a3e0ac896c538c9d8a747c8",
"77af93200138f46bb178c02f710944a01ed86481",
"71538776757a32eee930d297f6667cd0ec2e9231"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"34,432 user conversations"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)."
],
"highlighted_evidence": [
" Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\")."
]
},
{
"unanswerable": false,
"extractive_spans": [
"34,432 "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).",
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)."
],
"highlighted_evidence": [
"Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. ",
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock.",
"We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations."
]
}
],
"annotation_id": [
"a7ea8bc335b1a8d974c2b6a518d4efb4b9905549",
"b9f1ba799b2d213f5d7ce0b1e03adcac6ad30772"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"overall rating",
"mean number of turns"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions."
],
"highlighted_evidence": [
" We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions."
]
},
{
"unanswerable": false,
"extractive_spans": [
"overall rating",
"mean number of turns"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions."
],
"highlighted_evidence": [
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions."
]
}
],
"annotation_id": [
"430a57dc6dc6a57617791e25e886c1b8d5ad6c35",
"ea5628650f48b7c9dac7c9255f29313a794748e0"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Amazon Conversational Bot Toolkit",
"natural language understanding (NLU) (nlu) module",
"dialog manager",
"knowledge bases",
"natural language generation (NLG) (nlg) module",
"text to speech (TTS) (tts)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1."
],
"highlighted_evidence": [
"We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts)."
]
}
],
"annotation_id": [
"7196fa2dc147c614e3dce0521e0ec664d2962f6f"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1."
],
"highlighted_evidence": [
"We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response.",
"While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1."
]
}
],
"annotation_id": [
"88ef01edfa9b349e03b234f049663bd35c911e3b"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)."
],
"highlighted_evidence": [
"Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\")."
]
}
],
"annotation_id": [
"20c1065f9d96bb413f4d24665d0d30692ad2ded6"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.",
"Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts."
],
"highlighted_evidence": [
"We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.\n\nResults showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts."
]
}
],
"annotation_id": [
"9766d4b4b1500c83da733bd582476733ecd100ce"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Gunrock system architecture",
"Figure 2: Mean user rating by mean number of words. Error bars show standard error.",
"Figure 3: Mean user rating based on number of queries to Gunrock’s backstory. Error bars show standard error.",
"Figure 4: Mean user rating based ’Has Pet’. Error bars show standard error."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png"
]
} |
2002.06644 | Towards Detection of Subjective Bias using Contextualized Word Embeddings | Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced in natural language via inflammatory words and phrases, casting doubt over facts, and presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus(WNC). The dataset consists of $360k$ labeled instances, from Wikipedia edits that remove various instances of the bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like $BERT_{large}$ by a margin of $5.6$ F1 score. | {
"section_name": [
"Introduction",
"Baselines and Approach",
"Baselines and Approach ::: Baselines",
"Baselines and Approach ::: Proposed Approaches",
"Experiments ::: Dataset and Experimental Settings",
"Experiments ::: Experimental Results",
"Conclusion"
],
"paragraphs": [
[
"In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\\%$ of Americans believe that news sources do not report the news objectively , thus implying the prevalence of the bias. Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language.",
"There has been considerable work on capturing subjectivity using text-classification models ranging from linguistic-feature-based modelsBIBREF1 to finetuned pre-trained word embeddings like BERTBIBREF2. The detection of bias-inducing words in a Wikipedia statement was explored in BIBREF1. The authors propose the \"Neutral Point of View\" (NPOV) corpus made using Wikipedia revision history, containing Wikipedia edits that are specifically designed to remove subjective bias. They use logistic regression with linguistic features, including factive verbs, hedges, and subjective intensifiers to detect bias-inducing words. In BIBREF2, the authors extend this work by mitigating subjective bias after detecting bias-inducing words using a BERT-based model. However, they primarily focused on detecting and mitigating subjective bias for single-word edits. We extend their work by incorporating multi-word edits by detecting bias at the sentence level. We further use their version of the NPOV corpus called Wiki Neutrality Corpus(WNC) for this work.",
"The task of detecting sentences containing subjective bias rather than individual words inducing the bias has been explored in BIBREF3. However, they conduct majority of their experiments in controlled settings, limiting the type of articles from which the revisions were extracted. Their attempt to test their models in a general setting is dwarfed by the fact that they used revisions from a single Wikipedia article resulting in just 100 instances to evaluate their proposed models robustly. Consequently, we perform our experiments in the complete WNC corpus, which consists of $423,823$ revisions in Wikipedia marked by its editors over a period of 15 years, to simulate a more general setting for the bias.",
"In this work, we investigate the application of BERT-based models for the task of subjective language detection. We explore various BERT-based models, including BERT, RoBERTa, ALBERT, with their base and large specifications along with their native classifiers. We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques. We show that our model outperforms the baselines by a margin of $5.6$ of F1 score and $5.95\\%$ of Accuracy."
],
[
"In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection."
],
[
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset."
],
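For concreteness, the two-layer BiLSTM baseline (with the dropout settings listed later in the experimental section) can be sketched in Keras as follows; the vocabulary size, embedding dimension, and GloVe initialisation are placeholders.

```python
# Illustrative Keras version of the two-layer BiLSTM baseline. Sizes are
# placeholders; the embedding layer would be initialised with GloVe vectors.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, EMB_DIM = 20000, 300

model = tf.keras.Sequential([
    layers.Embedding(VOCAB, EMB_DIM),                 # init with GloVe weights
    layers.Bidirectional(layers.LSTM(64, return_sequences=True,
                                     dropout=0.05, recurrent_dropout=0.2)),
    layers.Bidirectional(layers.LSTM(64, dropout=0.05, recurrent_dropout=0.2)),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```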
[
"Optimized BERT-based models: We use BERT-based models optimized as in BIBREF6 and BIBREF7, pretrained on a dataset as large as twelve times as compared to $BERT_{large}$, with bigger batches, and longer sequences. ALBERT, introduced in BIBREF7, uses factorized embedding parameterization and cross-layer parameter sharing for parameter reduction. These optimizations have led both the models to outperform $BERT_{large}$ in various benchmarking tests, like GLUE for text classification and SQuAD for Question Answering.",
"Distilled BERT-based models: Secondly, we propose to use distilled BERT-based models, as introduced in BIBREF8. They are smaller general-purpose language representation model, pre-trained by leveraging distillation knowledge. This results in significantly smaller and faster models with performance comparable to their undistilled versions. We finetune these pretrained distilled models on the training corpus to efficiently detect subjectivity.",
"BERT-based ensemble models: Lastly, we use the weighted-average ensembling technique to exploit the predictions made by different variations of the above models. Ensembling methodology entails engendering a predictive model by utilizing predictions from multiple models in order to improve Accuracy and F1, decrease variance, and bias. We experiment with variations of $RoBERTa_{large}$, $ALBERT_{xxlarge.v2}$, $DistilRoBERTa$ and $BERT$ and outline selected combinations in tab:experimental-results."
],
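The weighted-average ensembling step itself is straightforward; a sketch over per-model class probabilities follows. The weights shown are placeholders, not tuned values from the paper.

```python
# Illustrative weighted-average ensemble over per-model softmax outputs.
import numpy as np

def ensemble_probs(prob_list, weights):
    """prob_list: list of (n_examples, n_classes) arrays; weights: per model."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.stack(prob_list, axis=0)        # (n_models, n, n_classes)
    return np.tensordot(w, stacked, axes=1)      # (n, n_classes)

p_roberta = np.array([[0.20, 0.80], [0.60, 0.40]])
p_albert  = np.array([[0.30, 0.70], [0.55, 0.45]])
p_distil  = np.array([[0.25, 0.75], [0.70, 0.30]])

avg = ensemble_probs([p_roberta, p_albert, p_distil], weights=[0.4, 0.35, 0.25])
preds = avg.argmax(axis=1)    # final subjective / neutral labels
```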
[
"We perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences, and their neutral counterparts crawled from $423,823$ Wikipedia revisions between 2004 and 2019. We randomly shuffled these sentences and split this dataset into two parts in a $90:10$ Train-Test split and perform the evaluation on the held-out test dataset.",
"For all BERT-based models, we use a learning rate of $2*10^{-5}$, a maximum sequence length of 50, and a weight decay of $0.01$ while finetuning the model. We use FastText's recently open-sourced automatic hyperparameter optimization functionality while training the model. For the BiLSTM baseline, we use a dropout of $0.05$ along with a recurrent dropout of $0.2$ in two 64 unit sized stacked BiLSTMs, using softmax activation layer as the final dense layer."
],
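One standard way to realise these finetuning settings with the Hugging Face transformers Trainer is sketched below; the dataset loading, epoch count, and batch size are assumptions, since the paper does not spell out its training loop.

```python
# Sketch of finetuning a BERT-family classifier with the stated settings
# (learning rate 2e-5, max sequence length 50, weight decay 0.01).
# Assumptions: epochs/batch size are placeholders; `train_ds`/`eval_ds`
# are pre-tokenized datasets prepared elsewhere.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

name = "roberta-large"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

def encode(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=50)

args = TrainingArguments(output_dir="wnc-subjectivity", learning_rate=2e-5,
                         weight_decay=0.01, num_train_epochs=3,
                         per_device_train_batch_size=16)
# trainer = Trainer(model=model, args=args, train_dataset=train_ds,
#                   eval_dataset=eval_ds)
# trainer.train()
```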
[
"tab:experimental-results shows the performance of different models on the WNC corpus evaluated on the following four metrics: Precision, Recall, F1, and Accuracy. Our proposed methodology, the use of finetuned optimized BERT based models, and BERT-based ensemble models outperform the baselines for all the metrics.",
"Among the optimized BERT based models, $RoBERTa_{large}$ outperforms all other non-ensemble models and the baselines for all metrics. It further achieves a maximum recall of $0.681$ for all the proposed models. We note that DistillRoBERTa, a distilled model, performs competitively, achieving $69.69\\%$ accuracy, and $0.672$ F1 score. This observation shows that distilled pretrained models can replace their undistilled counterparts in a low-computing environment.",
"We further observe that ensemble models perform better than optimized BERT-based models and distilled pretrained models. Our proposed ensemble comprising of $RoBERTa_{large}$, $ALBERT_{xxlarge.v2}$, $DistilRoBERTa$ and $BERT$ outperforms all the proposed models obtaining $0.704$ F1 score, $0.733$ precision, and $71.61\\%$ Accuracy."
],
[
"In this paper, we investigated BERT-based architectures for sentence level subjective bias detection. We perform our experiments on a general Wikipedia corpus consisting of more than $360k$ pre and post subjective bias neutralized sentences. We found our proposed architectures to outperform the existing baselines significantly. BERT-based ensemble consisting of RoBERTa, ALBERT, DistillRoBERTa, and BERT led to the highest F1 and Accuracy. In the future, we would like to explore document-level detection of subjective bias, multi-word mitigation of the bias, applications of detecting the bias in recommendation systems."
]
]
} | {
"question": [
"Do the authors report only on English?",
"What is the baseline for the experiments?",
"Which experiments are perfomed?"
],
"question_id": [
"830de0bd007c4135302138ffa8f4843e4915e440",
"680dc3e56d1dc4af46512284b9996a1056f89ded",
"bd5379047c2cf090bea838c67b6ed44773bcd56f"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"bias",
"bias",
"bias"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"dfc487e35ee5131bc5054463ace009e6bd8fc671"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"FastText",
"BiLSTM",
"BERT"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset."
],
"highlighted_evidence": [
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset."
]
},
{
"unanswerable": false,
"extractive_spans": [
"FastText",
"BERT ",
"two-layer BiLSTM architecture with GloVe word embeddings"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Baselines and Approach",
"In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.",
"Baselines and Approach ::: Baselines",
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.",
"FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task"
],
"highlighted_evidence": [
"Baselines and Approach\nIn this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.\n\n",
"Baselines and Approach ::: Baselines\nFastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.\n\nBiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.\n\nBERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.",
"FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task"
]
}
],
"annotation_id": [
"23c76dd5ac11dd015f81868f3a8e1bafdf3d424c",
"2c63f673e8658e64600cc492bc7d6a48b56c2119"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "They used BERT-based models to detect subjective language in the WNC corpus",
"evidence": [
"In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\\%$ of Americans believe that news sources do not report the news objectively , thus implying the prevalence of the bias. Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language.",
"In this work, we investigate the application of BERT-based models for the task of subjective language detection. We explore various BERT-based models, including BERT, RoBERTa, ALBERT, with their base and large specifications along with their native classifiers. We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques. We show that our model outperforms the baselines by a margin of $5.6$ of F1 score and $5.95\\%$ of Accuracy.",
"Experiments ::: Dataset and Experimental Settings",
"We perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences, and their neutral counterparts crawled from $423,823$ Wikipedia revisions between 2004 and 2019. We randomly shuffled these sentences and split this dataset into two parts in a $90:10$ Train-Test split and perform the evaluation on the held-out test dataset."
],
"highlighted_evidence": [
"In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints.",
"In this work, we investigate the application of BERT-based models for the task of subjective language detection.",
"Experiments ::: Dataset and Experimental Settings\nWe perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences, and their neutral counterparts crawled from $423,823$ Wikipedia revisions between 2004 and 2019"
]
}
],
"annotation_id": [
"293dcdfb800de157c1c4be7641cd05512cc26fb2"
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
]
} | {
"caption": [
"Table 1: Experimental Results for the Subjectivity Detection Task"
],
"file": [
"2-Table1-1.png"
]
} |
1809.08731 | Sentence-Level Fluency Evaluation: References Help, But Can Be Spared! | Motivated by recent findings on the probabilistic modeling of acceptability judgments, we propose syntactic log-odds ratio (SLOR), a normalized language model score, as a metric for referenceless fluency evaluation of natural language generation output at the sentence level. We further introduce WPSLOR, a novel WordPiece-based version, which harnesses a more compact language model. Even though word-overlap metrics like ROUGE are computed with the help of hand-written references, our referenceless methods obtain a significantly higher correlation with human fluency scores on a benchmark dataset of compressed sentences. Finally, we present ROUGE-LM, a reference-based metric which is a natural extension of WPSLOR to the case of available references. We show that ROUGE-LM yields a significantly higher correlation with human judgments than all baseline metrics, including WPSLOR on its own. | {
"section_name": [
"Introduction",
"On Acceptability",
"Method",
"SLOR",
"WordPieces",
"WPSLOR",
"Experiment",
"Dataset",
"LM Hyperparameters and Training",
"Baseline Metrics",
"Correlation and Evaluation Scores",
"Results and Discussion",
"Analysis I: Fluency Evaluation per Compression System",
"Analysis II: Fluency Evaluation per Domain",
"Incorporation of Given References",
"Experimental Setup",
"Fluency Evaluation",
"Compression Evaluation",
"Criticism of Common Metrics for NLG",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Producing sentences which are perceived as natural by a human addressee—a property which we will denote as fluency throughout this paper —is a crucial goal of all natural language generation (NLG) systems: it makes interactions more natural, avoids misunderstandings and, overall, leads to higher user satisfaction and user trust BIBREF0 . Thus, fluency evaluation is important, e.g., during system development, or for filtering unacceptable generations at application time. However, fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. Hence, it is not guaranteed that a correct output will match any of a finite number of given references. This results in difficulties for current reference-based evaluation, especially of fluency, causing word-overlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2 . As a result, fluency evaluation of NLG is often done manually, which is costly and time-consuming.",
"Evaluating sentences on their fluency, on the other hand, is a linguistic ability of humans which has been the subject of a decade-long debate in cognitive science. In particular, the question has been raised whether the grammatical knowledge that underlies this ability is probabilistic or categorical in nature BIBREF3 , BIBREF4 , BIBREF5 . Within this context, lau2017grammaticality have recently shown that neural language models (LMs) can be used for modeling human ratings of acceptability. Namely, they found SLOR BIBREF6 —sentence log-probability which is normalized by unigram log-probability and sentence length—to correlate well with acceptability judgments at the sentence level.",
"However, to the best of our knowledge, these insights have so far gone disregarded by the natural language processing (NLP) community. In this paper, we investigate the practical implications of lau2017grammaticality's findings for fluency evaluation of NLG, using the task of automatic compression BIBREF7 , BIBREF8 as an example (cf. Table 1 ). Specifically, we test our hypothesis that SLOR should be a suitable metric for evaluation of compression fluency which (i) does not rely on references; (ii) can naturally be applied at the sentence level (in contrast to the system level); and (iii) does not need human fluency annotations of any kind. In particular the first aspect, i.e., SLOR not needing references, makes it a promising candidate for automatic evaluation. Getting rid of human references has practical importance in a variety of settings, e.g., if references are unavailable due to a lack of resources for annotation, or if obtaining references is impracticable. The latter would be the case, for instance, when filtering system outputs at application time.",
"We further introduce WPSLOR, a novel, WordPiece BIBREF9 -based version of SLOR, which drastically reduces model size and training time. Our experiments show that both approaches correlate better with human judgments than traditional word-overlap metrics, even though the latter do rely on reference compressions. Finally, investigating the case of available references and how to incorporate them, we combine WPSLOR and ROUGE to ROUGE-LM, a novel reference-based metric, and increase the correlation with human fluency ratings even further."
],
[
"Acceptability judgments, i.e., speakers' judgments of the well-formedness of sentences, have been the basis of much linguistics research BIBREF10 , BIBREF11 : a speakers intuition about a sentence is used to draw conclusions about a language's rules. Commonly, “acceptability” is used synonymously with “grammaticality”, and speakers are in practice asked for grammaticality judgments or acceptability judgments interchangeably. Strictly speaking, however, a sentence can be unacceptable, even though it is grammatical – a popular example is Chomsky's phrase “Colorless green ideas sleep furiously.” BIBREF3 In turn, acceptable sentences can be ungrammatical, e.g., in an informal context or in poems BIBREF12 .",
"Scientists—linguists, cognitive scientists, psychologists, and NLP researcher alike—disagree about how to represent human linguistic abilities. One subject of debates are acceptability judgments: while, for many, acceptability is a binary condition on membership in a set of well-formed sentences BIBREF3 , others assume that it is gradient in nature BIBREF13 , BIBREF2 . Tackling this research question, lau2017grammaticality aimed at modeling human acceptability judgments automatically, with the goal to gain insight into the nature of human perception of acceptability. In particular, they tried to answer the question: Do humans judge acceptability on a gradient scale? Their experiments showed a strong correlation between human judgments and normalized sentence log-probabilities under a variety of LMs for artificial data they had created by translating and back-translating sentences with neural models. While they tried different types of LMs, best results were obtained for neural models, namely recurrent neural networks (RNNs).",
"In this work, we investigate if approaches which have proven successful for modeling acceptability can be applied to the NLP problem of automatic fluency evaluation."
],
[
"In this section, we first describe SLOR and the intuition behind this score. Then, we introduce WordPieces, before explaining how we combine the two."
],
[
"SLOR assigns to a sentence $S$ a score which consists of its log-probability under a given LM, normalized by unigram log-probability and length: ",
"$$\\text{SLOR}(S) = &\\frac{1}{|S|} (\\ln (p_M(S)) \\\\\\nonumber &- \\ln (p_u(S)))$$ (Eq. 8) ",
" where $p_M(S)$ is the probability assigned to the sentence under the LM. The unigram probability $p_u(S)$ of the sentence is calculated as ",
"$$p_u(S) = \\prod _{t \\in S}p(t)$$ (Eq. 9) ",
"with $p(t)$ being the unconditional probability of a token $t$ , i.e., given no context.",
"The intuition behind subtracting unigram log-probabilities is that a token which is rare on its own (in contrast to being rare at a given position in the sentence) should not bring down the sentence's rating. The normalization by sentence length is necessary in order to not prefer shorter sentences over equally fluent longer ones. Consider, for instance, the following pair of sentences: ",
"$$\\textrm {(i)} ~ ~ &\\textrm {He is a citizen of France.}\\nonumber \\\\\n\\textrm {(ii)} ~ ~ &\\textrm {He is a citizen of Tuvalu.}\\nonumber $$ (Eq. 11) ",
" Given that both sentences are of equal length and assuming that France appears more often in a given LM training set than Tuvalu, the length-normalized log-probability of sentence (i) under the LM would most likely be higher than that of sentence (ii). However, since both sentences are equally fluent, we expect taking each token's unigram probability into account to lead to a more suitable score for our purposes.",
"We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the same corpus."
],
[
"Sub-word units like WordPieces BIBREF9 are getting increasingly important in NLP. They constitute a compromise between characters and words: On the one hand, they yield a smaller vocabulary, which reduces model size and training time, and improve handling of rare words, since those are partitioned into more frequent segments. On the other hand, they contain more information than characters.",
"WordPiece models are estimated using a data-driven approach which maximizes the LM likelihood of the training corpus as described in wu2016google and 6289079."
],
[
"We propose a novel version of SLOR, by incorporating a LM which is trained on a corpus which has been split by a WordPiece model. This leads to a smaller vocabulary, resulting in a LM with less parameters, which is faster to train (around 12h compared to roughly 5 days for the word-based version in our experiments). We will refer to the word-based SLOR as WordSLOR and to our newly proposed WordPiece-based version as WPSLOR."
],
[
"Now, we present our main experiment, in which we assess the performances of WordSLOR and WPSLOR as fluency evaluation metrics."
],
[
"We experiment on the compression dataset by toutanova2016dataset. It contains single sentences and two-sentence paragraphs from the Open American National Corpus (OANC), which belong to 4 genres: newswire, letters, journal, and non-fiction. Gold references are manually created and the outputs of 4 compression systems (ILP (extractive), NAMAS (abstractive), SEQ2SEQ (extractive), and T3 (abstractive); cf. toutanova2016dataset for details) for the test data are provided. Each example has 3 to 5 independent human ratings for content and fluency. We are interested in the latter, which is rated on an ordinal scale from 1 (disfluent) through 3 (fluent). We experiment on the 2955 system outputs for the test split.",
"Average fluency scores per system are shown in Table 2 . As can be seen, ILP produces the best output. In contrast, NAMAS is the worst system for fluency. In order to be able to judge the reliability of the human annotations, we follow the procedure suggested by TACL732 and used by toutanova2016dataset, and compute the quadratic weighted $\\kappa $ BIBREF14 for the human fluency scores of the system-generated compressions as $0.337$ ."
],
[
"We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data.",
"The hyperparameters of all LMs are tuned using perplexity on a held-out part of Gigaword, since we expect LM perplexity and final evaluation performance of WordSLOR and, respectively, WPSLOR to correlate. Our best networks consist of two layers with 512 hidden units each, and are trained for $2,000,000$ steps with a minibatch size of 128. For optimization, we employ ADAM BIBREF16 ."
],
[
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.",
"We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as ",
"$$\\text{NCE}(S) = \\tfrac{1}{|S|} \\ln (p_M(S))$$ (Eq. 22) ",
"with $p_M(S)$ being the probability assigned to the sentence by a LM. We employ the same LMs as for SLOR, i.e., LMs trained on words (WordNCE) and WordPieces (WPNCE).",
"Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy: ",
"$$\\text{PPL}(S) = \\exp (-\\text{NCE}(S))$$ (Eq. 24) ",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments."
],
[
"Following earlier work BIBREF2 , we evaluate our metrics using Pearson correlation with human judgments. It is defined as the covariance divided by the product of the standard deviations: ",
"$$\\rho _{X,Y} = \\frac{\\text{cov}(X,Y)}{\\sigma _X \\sigma _Y}$$ (Eq. 28) ",
"Pearson cannot accurately judge a metric's performance for sentences of very similar quality, i.e., in the extreme case of rating outputs of identical quality, the correlation is either not defined or 0, caused by noise of the evaluation model. Thus, we additionally evaluate using mean squared error (MSE), which is defined as the squares of residuals after a linear transformation, divided by the sample size: ",
"$$\\text{MSE}_{X,Y} = \\underset{f}{\\min }\\frac{1}{|X|}\\sum \\limits _{i = 1}^{|X|}{(f(x_i) - y_i)^2}$$ (Eq. 30) ",
"with $f$ being a linear function. Note that, since MSE is invariant to linear transformations of $X$ but not of $Y$ , it is a non-symmetric quasi-metric. We apply it with $Y$ being the human ratings. An additional advantage as compared to Pearson is that it has an interpretable meaning: the expected error made by a given metric as compared to the human rating."
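A small sketch of the two evaluation scores, with the linear transformation for MSE fitted by least squares, is shown below; the example scores are dummies.

```python
import numpy as np

def pearson(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))

def mse_after_linear_fit(metric_scores, human_scores):
    """MSE of the human ratings after the best linear transformation of the metric."""
    a, b = np.polyfit(metric_scores, human_scores, deg=1)   # least-squares line
    residuals = a * np.asarray(metric_scores) + b - np.asarray(human_scores)
    return np.mean(residuals ** 2)

metric = [0.1, 0.4, 0.35, 0.8]   # dummy metric scores
human = [1.0, 2.0, 2.0, 3.0]     # dummy fluency ratings
print(pearson(metric, human), mse_after_linear_fit(metric, human))
```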
],
[
"As shown in Table 3 , WordSLOR and WPSLOR correlate best with human judgments: WordSLOR (respectively WPSLOR) has a $0.025$ (respectively $0.008$ ) higher Pearson correlation than the best word-overlap metric ROUGE-L-mult, even though the latter requires multiple reference compressions. Furthermore, if we consider with ROUGE-L-single a setting with a single given reference, the distance to WordSLOR increases to $0.048$ for Pearson correlation. Note that, since having a single reference is very common, this result is highly relevant for practical applications. Considering MSE, the top two metrics are still WordSLOR and WPSLOR, with a $0.008$ and, respectively, $0.002$ lower error than the third best metric, ROUGE-L-mult. ",
"Comparing WordSLOR and WPSLOR, we find no significant differences: $0.017$ for Pearson and $0.006$ for MSE. However, WPSLOR uses a more compact LM and, hence, has a shorter training time, since the vocabulary is smaller ( $16,000$ vs. $128,000$ tokens).",
"Next, we find that WordNCE and WPNCE perform roughly on par with word-overlap metrics. This is interesting, since they, in contrast to traditional metrics, do not require reference compressions. However, their correlation with human fluency judgments is strictly lower than that of their respective SLOR counterparts. The difference between WordSLOR and WordNCE is bigger than that between WPSLOR and WPNCE. This might be due to accounting for differences in frequencies being more important for words than for WordPieces. Both WordPPL and WPPPL clearly underperform as compared to all other metrics in our experiments.",
"The traditional word-overlap metrics all perform similarly. ROUGE-L-mult and LR2-F-mult are best and worst, respectively.",
"Results are shown in Table 7 . First, we can see that using SVR (line 1) to combine ROUGE-L-mult and WPSLOR outperforms both individual scores (lines 3-4) by a large margin. This serves as a proof of concept: the information contained in the two approaches is indeed complementary.",
"Next, we consider the setting where only references and no annotated examples are available. In contrast to SVR (line 1), ROUGE-LM (line 2) has only the same requirements as conventional word-overlap metrics (besides a large corpus for training the LM, which is easy to obtain for most languages). Thus, it can be used in the same settings as other word-overlap metrics. Since ROUGE-LM—an uninformed combination—performs significantly better than both ROUGE-L-mult and WPSLOR on their own, it should be the metric of choice for evaluating fluency with given references."
],
[
"The results per compression system (cf. Table 4 ) look different from the correlations in Table 3 : Pearson and MSE are both lower. This is due to the outputs of each given system being of comparable quality. Therefore, the datapoints are similar and, thus, easier to fit for the linear function used for MSE. Pearson, in contrast, is lower due to its invariance to linear transformations of both variables. Note that this effect is smallest for ILP, which has uniformly distributed targets ( $\\text{Var}(Y) = 0.35$ vs. $\\text{Var}(Y) = 0.17$ for SEQ2SEQ).",
"Comparing the metrics, the two SLOR approaches perform best for SEQ2SEQ and T3. In particular, they outperform the best word-overlap metric baseline by $0.244$ and $0.097$ Pearson correlation as well as $0.012$ and $0.012$ MSE, respectively. Since T3 is an abstractive system, we can conclude that WordSLOR and WPSLOR are applicable even for systems that are not limited to make use of a fixed repertoire of words.",
"For ILP and NAMAS, word-overlap metrics obtain best results. The differences in performance, however, are with a maximum difference of $0.072$ for Pearson and ILP much smaller than for SEQ2SEQ. Thus, while the differences are significant, word-overlap metrics do not outperform our SLOR approaches by a wide margin. Recall, additionally, that word-overlap metrics rely on references being available, while our proposed approaches do not require this."
],
[
"Looking next at the correlations for all models but different domains (cf. Table 5 ), we first observe that the results across domains are similar, i.e., we do not observe the same effect as in Subsection \"Analysis I: Fluency Evaluation per Compression System\" . This is due to the distributions of scores being uniform ( $\\text{Var}(Y) \\in [0.28, 0.36]$ ).",
"Next, we focus on an important question: How much does the performance of our SLOR-based metrics depend on the domain, given that the respective LMs are trained on Gigaword, which consists of news data?",
"Comparing the evaluation performance for individual metrics, we observe that, except for letters, WordSLOR and WPSLOR perform best across all domains: they outperform the best word-overlap metric by at least $0.019$ and at most $0.051$ Pearson correlation, and at least $0.004$ and at most $0.014$ MSE. The biggest difference in correlation is achieved for the journal domain. Thus, clearly even LMs which have been trained on out-of-domain data obtain competitive performance for fluency evaluation. However, a domain-specific LM might additionally improve the metrics' correlation with human judgments. We leave a more detailed analysis of the importance of the training data's domain for future work."
],
[
"ROUGE was shown to correlate well with ratings of a generated text's content or meaning at the sentence level BIBREF2 . We further expect content and fluency ratings to be correlated. In fact, sometimes it is difficult to distinguish which one is problematic: to illustrate this, we show some extreme examples—compressions which got the highest fluency rating and the lowest possible content rating by at least one rater, but the lowest fluency score and the highest content score by another—in Table 6 . We, thus, hypothesize that ROUGE should contain information about fluency which is complementary to SLOR, and want to make use of references for fluency evaluation, if available. In this section, we experiment with two reference-based metrics – one trainable one, and one that can be used without fluency annotations, i.e., in the same settings as pure word-overlap metrics."
],
[
"First, we assume a setting in which we have the following available: (i) system outputs whose fluency is to be evaluated, (ii) reference generations for evaluating system outputs, (iii) a small set of system outputs with references, which has been annotated for fluency by human raters, and (iv) a large unlabeled corpus for training a LM. Note that available fluency annotations are often uncommon in real-world scenarios; the reason we use them is that they allow for a proof of concept. In this setting, we train scikit's BIBREF18 support vector regression model (SVR) with the default parameters on predicting fluency, given WPSLOR and ROUGE-L-mult. We use 500 of our total 2955 examples for each of training and development, and the remaining 1955 for testing.",
"Second, we simulate a setting in which we have only access to (i) system outputs which should be evaluated on fluency, (ii) reference compressions, and (iii) large amounts of unlabeled text. In particular, we assume to not have fluency ratings for system outputs, which makes training a regression model impossible. Note that this is the standard setting in which word-overlap metrics are applied. Under these conditions, we propose to normalize both given scores by mean and variance, and to simply add them up. We call this new reference-based metric ROUGE-LM. In order to make this second experiment comparable to the SVR-based one, we use the same 1955 test examples."
],
[
"Fluency evaluation is related to grammatical error detection BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 and grammatical error correction BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . However, it differs from those in several aspects; most importantly, it is concerned with the degree to which errors matter to humans.",
"Work on automatic fluency evaluation in NLP has been rare. heilman2014predicting predicted the fluency (which they called grammaticality) of sentences written by English language learners. In contrast to ours, their approach is supervised. stent2005evaluating and cahill2009correlating found only low correlation between automatic metrics and fluency ratings for system-generated English paraphrases and the output of a German surface realiser, respectively. Explicit fluency evaluation of NLG, including compression and the related task of summarization, has mostly been performed manually. vadlapudi-katragadda:2010:SRW used LMs for the evaluation of summarization fluency, but their models were based on part-of-speech tags, which we do not require, and they were non-neural. Further, they evaluated longer texts, not single sentences like we do. toutanova2016dataset compared 80 word-overlap metrics for evaluating the content and fluency of compressions, finding only low correlation with the latter. However, they did not propose an alternative evaluation. We aim at closing this gap."
],
[
"Automatic compression evaluation has mostly had a strong focus on content. Hence, word-overlap metrics like ROUGE BIBREF1 have been widely used for compression evaluation. However, they have certain shortcomings, e.g., they correlate best for extractive compression, while we, in contrast, are interested in an approach which generalizes to abstractive systems. Alternatives include success rate BIBREF28 , simple accuracy BIBREF29 , which is based on the edit distance between the generation and the reference, or word accuracy BIBREF30 , the equivalent for multiple references."
],
[
"In the sense that we promote an explicit evaluation of fluency, our work is in line with previous criticism of evaluating NLG tasks with a single score produced by word-overlap metrics.",
"The need for better evaluation for machine translation (MT) was expressed, e.g., by callison2006re, who doubted the meaningfulness of BLEU, and claimed that a higher BLEU score was neither a necessary precondition nor a proof of improved translation quality. Similarly, song2013bleu discussed BLEU being unreliable at the sentence or sub-sentence level (in contrast to the system-level), or for only one single reference. This was supported by isabelle-cherry-foster:2017:EMNLP2017, who proposed a so-called challenge set approach as an alternative. graham-EtAl:2016:COLING performed a large-scale evaluation of human-targeted metrics for machine translation, which can be seen as a compromise between human evaluation and fully automatic metrics. They also found fully automatic metrics to correlate only weakly or moderately with human judgments. bojar2016ten further confirmed that automatic MT evaluation methods do not perform well with a single reference. The need of better metrics for MT has been addressed since 2008 in the WMT metrics shared task BIBREF31 , BIBREF32 .",
"For unsupervised dialogue generation, liu-EtAl:2016:EMNLP20163 obtained close to no correlation with human judgements for BLEU, ROUGE and METEOR. They contributed this in a large part to the unrestrictedness of dialogue answers, which makes it hard to match given references. They emphasized that the community should move away from these metrics for dialogue generation tasks, and develop metrics that correlate more strongly with human judgments. elliott-keller:2014:P14-2 reported the same for BLEU and image caption generation. duvsek2017referenceless suggested an RNN to evaluate NLG at the utterance level, given only the input meaning representation."
],
[
"We empirically confirmed the effectiveness of SLOR, a LM score which accounts for the effects of sentence length and individual unigram probabilities, as a metric for fluency evaluation of the NLG task of automatic compression at the sentence level. We further introduced WPSLOR, an adaptation of SLOR to WordPieces, which reduced both model size and training time at a similar evaluation performance. Our experiments showed that our proposed referenceless metrics correlate significantly better with fluency ratings for the outputs of compression systems than traditional word-overlap metrics on a benchmark dataset. Additionally, they can be applied even in settings where no references are available, or would be costly to obtain. Finally, for given references, we proposed the reference-based metric ROUGE-LM, which consists of a combination of WPSLOR and ROUGE. Thus, we were able to obtain an even more accurate fluency evaluation."
],
[
"We would like to thank Sebastian Ebert and Samuel Bowman for their detailed and helpful feedback."
]
]
} | {
"question": [
"Is ROUGE their only baseline?",
"what language models do they use?",
"what questions do they ask human judges?"
],
"question_id": [
"7aa8375cdf4690fc3b9b1799b0f5a9ec1c1736ed",
"3ac30bd7476d759ea5d9a5abf696d4dfc480175b",
"0e57a0983b4731eba9470ba964d131045c8c7ea7"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"yes",
"yes",
"yes"
],
"search_query": [
"social",
"social",
"social"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.",
"We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as",
"Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments."
],
"highlighted_evidence": [
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset;",
"We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length.",
"Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . "
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "No, other baseline metrics they use besides ROUGE-L are n-gram overlap, negative cross-entropy, perplexity, and BLEU.",
"evidence": [
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.",
"We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as",
"Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments."
],
"highlighted_evidence": [
"Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks.",
"We compare to the best n-gram-overlap metrics from toutanova2016dataset;",
"We further compare to the negative LM cross-entropy",
"Our next baseline is perplexity, ",
"Due to its popularity, we also performed initial experiments with BLEU BIBREF17 "
]
}
],
"annotation_id": [
"24ebf6cd50b3f873f013cd206aa999a4aa841317",
"d04c757c5a09e8a9f537d15bdd93ac4043c7a3e9"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"LSTM LMs"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus. More details on LSTM LMs can be found, e.g., in sundermeyer2012lstm. The unigram probabilities for SLOR are estimated using the same corpus.",
"We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data."
],
"highlighted_evidence": [
"We calculate the probability of a sentence with a long-short term memory (LSTM, hochreiter1997long) LM, i.e., a special type of RNN LM, which has been trained on a large corpus.",
"We train our LSTM LMs on the English Gigaword corpus BIBREF15 , which consists of news data."
]
}
],
"annotation_id": [
"5ecd71796a1b58d848a20b0fe4be06ee50ea40fb"
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"3fd01f74c49811127a1014b99a0681072e1ec34d"
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
]
} | {
"caption": [
"Table 1: Example compressions from our dataset with their fluency scores; scores in [1, 3], higher is better.",
"Table 2: Average fluency ratings for each compression system in the dataset by Toutanova et al. (2016).",
"Table 3: Pearson correlation (higher is better) and MSE (lower is better) for all metrics; best results in bold; refs=number of references used to compute the metric.",
"Table 4: Pearson correlation (higher is better) and MSE (lower is better), reported by compression system; best results in bold; refs=number of references used to compute the metric.",
"Table 5: Pearson correlation (higher is better) and MSE (lower is better), reported by domain of the original sentence or paragraph; best results in bold; refs=number of references used to compute the metric.",
"Table 6: Sentences for which raters were unsure if they were perceived as problematic due to fluency or content issues, together with the model which generated them.",
"Table 7: Combinations; all differences except for 3 and 4 are statistically significant; refs=number of references used to compute the metric; ROUGE=ROUGE-L-mult; best results in bold."
],
"file": [
"1-Table1-1.png",
"3-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png"
]
} |
1707.00995 | An empirical study on the effectiveness of images in Multimodal Neural Machine Translation | In state-of-the-art Neural Machine Translation (NMT), an attention mechanism is used during decoding to enhance the translation. At every step, the decoder uses this mechanism to focus on different parts of the source sentence to gather the most useful information before outputting its target word. Recently, the effectiveness of the attention mechanism has also been explored for multimodal tasks, where it becomes possible to focus both on sentence parts and image regions that they describe. In this paper, we compare several attention mechanism on the multimodal translation task (English, image to German) and evaluate the ability of the model to make use of images to improve translation. We surpass state-of-the-art scores on the Multi30k data set, we nevertheless identify and report different misbehavior of the machine while translating. | {
"section_name": [
"Introduction",
"Neural Machine Translation",
"Text-based NMT",
"Conditional GRU",
"Multimodal NMT",
"Attention-based Models",
"Soft attention",
"Hard Stochastic attention",
"Local Attention",
"Image attention optimization",
"Experiments",
"Training and model details",
"Quantitative results",
"Qualitative results",
"Conclusion and future work",
"Acknowledgements"
],
"paragraphs": [
[
"In machine translation, neural networks have attracted a lot of research attention. Recently, the attention-based encoder-decoder framework BIBREF0 , BIBREF1 has been largely adopted. In this approach, Recurrent Neural Networks (RNNs) map source sequences of words to target sequences. The attention mechanism is learned to focus on different parts of the input sentence while decoding. Attention mechanisms have shown to work with other modalities too, like images, where their are able to learn to attend the salient parts of an image, for instance when generating text captions BIBREF2 . For such applications, Convolutional Neural Networks (CNNs) such as Deep Residual BIBREF3 have shown to work best to represent images.",
"Multimodal models of texts and images empower new applications such as visual question answering or multimodal caption translation. Also, the grounding of multiple modalities against each other may enable the model to have a better understanding of each modality individually, such as in natural language understanding applications.",
"In the field of Machine Translation (MT), the efficient integration of multimodal information still remains a challenging task. It requires combining diverse modality vector representations with each other. These vector representations, also called context vectors, are computed in order the capture the most relevant information in a modality to output the best translation of a sentence.",
"To investigate the effectiveness of information obtained from images, a multimodal machine translation shared task BIBREF4 has been addressed to the MT community. The best results of NMT model were those of BIBREF5 huang2016attention who used LSTM fed with global visual features or multiple regional visual features followed by rescoring. Recently, BIBREF6 CalixtoLC17b proposed a doubly-attentive decoder that outperformed this baseline with less data and without rescoring.",
"Our paper is structured as follows. In section SECREF2 , we briefly describe our NMT model as well as the conditional GRU activation used in the decoder. We also explain how multi-modalities can be implemented within this framework. In the following sections ( SECREF3 and SECREF4 ), we detail three attention mechanisms and explain how we tweak them to work as well as possible with images. Finally, we report and analyze our results in section SECREF5 then conclude in section SECREF6 ."
],
[
"In this section, we detail the neural machine translation architecture by BIBREF1 BahdanauCB14, implemented as an attention-based encoder-decoder framework with recurrent neural networks (§ SECREF2 ). We follow by explaining the conditional GRU layer (§ SECREF8 ) - the gating mechanism we chose for our RNN - and how the model can be ported to a multimodal version (§ SECREF13 )."
],
[
"Given a source sentence INLINEFORM0 , the neural network directly models the conditional probability INLINEFORM1 of its translation INLINEFORM2 . The network consists of one encoder and one decoder with one attention mechanism. The encoder computes a representation INLINEFORM3 for each source sentence and a decoder generates one target word at a time and by decomposing the following conditional probability : DISPLAYFORM0 ",
"Each source word INLINEFORM0 and target word INLINEFORM1 are a column index of the embedding matrix INLINEFORM2 and INLINEFORM3 . The encoder is a bi-directional RNN with Gated Recurrent Unit (GRU) layers BIBREF7 , BIBREF8 , where a forward RNN INLINEFORM4 reads the input sequence as it is ordered (from INLINEFORM5 to INLINEFORM6 ) and calculates a sequence of forward hidden states INLINEFORM7 . A backward RNN INLINEFORM8 reads the sequence in the reverse order (from INLINEFORM9 to INLINEFORM10 ), resulting in a sequence of backward hidden states INLINEFORM11 . We obtain an annotation for each word INLINEFORM12 by concatenating the forward and backward hidden state INLINEFORM13 . Each annotation INLINEFORM14 contains the summaries of both the preceding words and the following words. The representation INLINEFORM15 for each source sentence is the sequence of annotations INLINEFORM16 .",
"The decoder is an RNN that uses a conditional GRU (cGRU, more details in § SECREF8 ) with an attention mechanism to generate a word INLINEFORM0 at each time-step INLINEFORM1 . The cGRU uses it's previous hidden state INLINEFORM2 , the whole sequence of source annotations INLINEFORM3 and the previously decoded symbol INLINEFORM4 in order to update it's hidden state INLINEFORM5 : DISPLAYFORM0 ",
"In the process, the cGRU also computes a time-dependent context vector INLINEFORM0 . Both INLINEFORM1 and INLINEFORM2 are further used to decode the next symbol. We use a deep output layer BIBREF9 to compute a vocabulary-sized vector : DISPLAYFORM0 ",
"where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are model parameters. We can parameterize the probability of decoding each word INLINEFORM4 as: DISPLAYFORM0 ",
"The initial state of the decoder INLINEFORM0 at time-step INLINEFORM1 is initialized by the following equation : DISPLAYFORM0 ",
"where INLINEFORM0 is a feedforward network with one hidden layer."
],
[
"The conditional GRU consists of two stacked GRU activations called INLINEFORM0 and INLINEFORM1 and an attention mechanism INLINEFORM2 in between (called ATT in the footnote paper). At each time-step INLINEFORM3 , REC1 firstly computes a hidden state proposal INLINEFORM4 based on the previous hidden state INLINEFORM5 and the previously emitted word INLINEFORM6 : DISPLAYFORM0 ",
" Then, the attention mechanism computes INLINEFORM0 over the source sentence using the annotations sequence INLINEFORM1 and the intermediate hidden state proposal INLINEFORM2 : DISPLAYFORM0 ",
"Finally, the second recurrent cell INLINEFORM0 , computes the hidden state INLINEFORM1 of the INLINEFORM2 by looking at the intermediate representation INLINEFORM3 and context vector INLINEFORM4 : DISPLAYFORM0 "
],
[
"Recently, BIBREF6 CalixtoLC17b proposed a doubly attentive decoder (referred as the \"MNMT\" model in the author's paper) which can be seen as an expansion of the attention-based NMT model proposed in the previous section. Given a sequence of second a modality annotations INLINEFORM0 , we also compute a new context vector based on the same intermediate hidden state proposal INLINEFORM1 : DISPLAYFORM0 ",
"This new time-dependent context vector is an additional input to a modified version of REC2 which now computes the final hidden state INLINEFORM0 using the intermediate hidden state proposal INLINEFORM1 and both time-dependent context vectors INLINEFORM2 and INLINEFORM3 : DISPLAYFORM0 ",
" The probabilities for the next target word (from equation EQREF5 ) also takes into account the new context vector INLINEFORM0 : DISPLAYFORM0 ",
"where INLINEFORM0 is a new trainable parameter.",
"In the field of multimodal NMT, the second modality is usually an image computed into feature maps with the help of a CNN. The annotations INLINEFORM0 are spatial features (i.e. each annotation represents features for a specific region in the image) . We follow the same protocol for our experiments and describe it in section SECREF5 ."
],
[
"We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . They have in common the fact that at each time step INLINEFORM1 of the decoding phase, all approaches first take as input the annotation sequence INLINEFORM2 to derive a time-dependent context vector that contain relevant information in the image to help predict the current target word INLINEFORM3 . Even though these models differ in how the time-dependent context vector is derived, they share the same subsequent steps. For each mechanism, we propose two hand-picked illustrations showing where the attention is placed in an image."
],
[
"Soft attention has firstly been used for syntactic constituency parsing by BIBREF10 NIPS2015Vinyals but has been widely used for translation tasks ever since. One should note that it slightly differs from BIBREF1 BahdanauCB14 where their attention takes as input the previous decoder hidden state instead of the current (intermediate) one as shown in equation EQREF11 . This mechanism has also been successfully investigated for the task of image description generation BIBREF2 where a model generates an image's description in natural language. It has been used in multimodal translation as well BIBREF6 , for which it constitutes a state-of-the-art.",
"The idea of the soft attentional model is to consider all the annotations when deriving the context vector INLINEFORM0 . It consists of a single feed-forward network used to compute an expected alignment INLINEFORM1 between modality annotation INLINEFORM2 and the target word to be emitted at the current time step INLINEFORM3 . The inputs are the modality annotations and the intermediate representation of REC1 INLINEFORM4 : DISPLAYFORM0 ",
"The vector INLINEFORM0 has length INLINEFORM1 and its INLINEFORM2 -th item contains a score of how much attention should be put on the INLINEFORM3 -th annotation in order to output the best word at time INLINEFORM4 . We compute normalized scores to create an attention mask INLINEFORM5 over annotations: DISPLAYFORM0 ",
" Finally, the modality time-dependent context vector INLINEFORM0 is computed as a weighted sum over the annotation vectors (equation ). In the above expressions, INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are trained parameters."
],
[
"This model is a stochastic and sampling-based process where, at every timestep INLINEFORM0 , we are making a hard choice to attend only one annotation. This corresponds to one spatial location in the image. Hard attention has previously been used in the context of object recognition BIBREF11 , BIBREF12 and later extended to image description generation BIBREF2 . In the context of multimodal NMT, we can follow BIBREF2 icml2015xuc15 because both our models involve the same process on images.",
"The mechanism INLINEFORM0 is now a function that returns a sampled intermediate latent variables INLINEFORM1 based upon a multinouilli distribution parameterized by INLINEFORM2 : DISPLAYFORM0 ",
"where INLINEFORM0 an indicator one-hot variable which is set to 1 if the INLINEFORM1 -th annotation (out of INLINEFORM2 ) is the one used to compute the context vector INLINEFORM3 : DISPLAYFORM0 ",
" Context vector INLINEFORM0 is now seen as the random variable of this distribution. We define the variational lower bound INLINEFORM1 on the marginal log evidence INLINEFORM2 of observing the target sentence INLINEFORM3 given modality annotations INLINEFORM4 . DISPLAYFORM0 ",
"The learning rules can be derived by taking derivatives of the above variational free energy INLINEFORM0 with respect to the model parameter INLINEFORM1 : DISPLAYFORM0 ",
"In order to propagate a gradient through this process, the summation in equation EQREF26 can then be approximated using Monte Carlo based sampling defined by equation EQREF24 : DISPLAYFORM0 ",
"To reduce variance of the estimator in equation EQREF27 , we use a moving average baseline estimated as an accumulated sum of the previous log likelihoods with exponential decay upon seeing the INLINEFORM0 -th mini-batch: DISPLAYFORM0 "
],
[
"In this section, we propose a local attentional mechanism that chooses to focus only on a small subset of the image annotations. Local Attention has been used for text-based translation BIBREF13 and is inspired by the selective attention model of BIBREF14 gregor15 for image generation. Their approach allows the model to select an image patch of varying location and zoom. Local attention uses instead the same \"zoom\" for all target positions and still achieved good performance. This model can be seen as a trade-off between the soft and hard attentional models. The model picks one patch in the annotation sequence (one spatial location) and selectively focuses on a small window of context around it. Even though an image can't be seen as a temporal sequence, we still hope that the model finds points of interest and selects the useful information around it. This approach has an advantage of being differentiable whereas the stochastic attention requires more complicated techniques such as variance reduction and reinforcement learning to train as shown in section SECREF22 . The soft attention has the drawback to attend the whole image which can be difficult to learn, especially because the number of annotations INLINEFORM0 is usually large (presumably to keep a significant spatial granularity).",
"More formally, at every decoding step INLINEFORM0 , the model first generates an aligned position INLINEFORM1 . Context vector INLINEFORM2 is derived as a weighted sum over the annotations within the window INLINEFORM3 where INLINEFORM4 is a fixed model parameter chosen empirically. These selected annotations correspond to a squared region in the attention maps around INLINEFORM7 . The attention mask INLINEFORM8 is of size INLINEFORM9 . The model predicts INLINEFORM10 as an aligned position in the annotation sequence (referred as Predictive alignment (local-m) in the author's paper) according to the following equation: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are both trainable model parameters and INLINEFORM2 is the annotation sequence length INLINEFORM3 . Because of the sigmoid, INLINEFORM4 . We use equation EQREF18 and EQREF19 respectively to compute the expected alignment vector INLINEFORM5 and the attention mask INLINEFORM6 . In addition, a Gaussian distribution centered around INLINEFORM7 is placed on the alphas in order to favor annotations near INLINEFORM8 : DISPLAYFORM0 ",
"where standard deviation INLINEFORM0 . We obtain context vector INLINEFORM1 by following equation ."
],
[
"Three optimizations can be added to the attention mechanism regarding the image modality. All lead to a better use of the image by the model and improved the translation scores overall.",
"At every decoding step INLINEFORM0 , we compute a gating scalar INLINEFORM1 according to the previous decoder state INLINEFORM2 : DISPLAYFORM0 ",
"It is then used to compute the time-dependent image context vector : DISPLAYFORM0 ",
" BIBREF2 icml2015xuc15 empirically found it to put more emphasis on the objects in the image descriptions generated with their model.",
"We also double the output size of trainable parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 in equation EQREF18 when it comes to compute the expected annotations over the image annotation sequence. More formally, given the image annotation sequence INLINEFORM3 , the tree matrices are of size INLINEFORM4 , INLINEFORM5 and INLINEFORM6 respectively. We noticed a better coverage of the objects in the image by the alpha weights.",
"Lastly, we use a grounding attention inspired by BIBREF15 delbrouck2017multimodal. The mechanism merge each spatial location INLINEFORM0 in the annotation sequence INLINEFORM1 with the initial decoder state INLINEFORM2 obtained in equation EQREF7 with non-linearity : DISPLAYFORM0 ",
" where INLINEFORM0 is INLINEFORM1 function. The new annotations go through a L2 normalization layer followed by two INLINEFORM2 convolutional layers (of size INLINEFORM3 respectively) to obtain INLINEFORM4 weights, one for each spatial location. We normalize the weights with a softmax to obtain a soft attention map INLINEFORM5 . Each annotation INLINEFORM6 is then weighted according to its corresponding INLINEFORM7 : DISPLAYFORM0 ",
" This method can be seen as the removal of unnecessary information in the image annotations according to the source sentence. This attention is used on top of the others - before decoding - and is referred as \"grounded image\" in Table TABREF41 ."
],
[
"For this experiments on Multimodal Machine Translation, we used the Multi30K dataset BIBREF17 which is an extended version of the Flickr30K Entities. For each image, one of the English descriptions was selected and manually translated into German by a professional translator. As training and development data, 29,000 and 1,014 triples are used respectively. A test set of size 1000 is used for metrics evaluation."
],
[
"All our models are build on top of the nematus framework BIBREF18 . The encoder is a bidirectional RNN with GRU, one 1024D single-layer forward and one 1024D single-layer backward RNN. Word embeddings for source and target language are of 620D and trained jointly with the model. Word embeddings and other non-recurrent matrices are initialized by sampling from a Gaussian INLINEFORM0 , recurrent matrices are random orthogonal and bias vectors are all initialized to zero.",
"To create the image annotations used by our decoder, we used a ResNet-50 pre-trained on ImageNet and extracted the features of size INLINEFORM0 at its res4f layer BIBREF3 . In our experiments, our decoder operates on the flattened 196 INLINEFORM1 1024 (i.e INLINEFORM2 ). We also apply dropout with a probability of 0.5 on the embeddings, on the hidden states in the bidirectional RNN in the encoder as well as in the decoder. In the decoder, we also apply dropout on the text annotations INLINEFORM3 , the image features INLINEFORM4 , on both modality context vector and on all components of the deep output layer before the readout operation. We apply dropout using one same mask in all time steps BIBREF19 .",
"We also normalize and tokenize English and German descriptions using the Moses tokenizer scripts BIBREF20 . We use the byte pair encoding algorithm on the train set to convert space-separated tokens into subwords BIBREF21 , reducing our vocabulary size to 9226 and 14957 words for English and German respectively.",
"All variants of our attention model were trained with ADADELTA BIBREF22 , with mini-batches of size 80 for our monomodal (text-only) NMT model and 40 for our multimodal NMT. We apply early stopping for model selection based on BLEU4 : training is halted if no improvement on the development set is observed for more than 20 epochs. We use the metrics BLEU4 BIBREF23 , METEOR BIBREF24 and TER BIBREF25 to evaluate the quality of our models' translations."
],
[
"We notice a nice overall progress over BIBREF6 CalixtoLC17b multimodal baseline, especially when using the stochastic attention. With improvements of +1.51 BLEU and -2.2 TER on both precision-oriented metrics, the model shows a strong similarity of the n-grams of our candidate translations with respect to the references. The more recall-oriented metrics METEOR scores are roughly the same across our models which is expected because all attention mechanisms share the same subsequent step at every time-step INLINEFORM0 , i.e. taking into account the attention weights of previous time-step INLINEFORM1 in order to compute the new intermediate hidden state proposal and therefore the new context vector INLINEFORM2 . Again, the largest improvement is given by the hard stochastic attention mechanism (+0.4 METEOR): because it is modeled as a decision process according to the previous choices, this may reinforce the idea of recall. We also remark interesting improvements when using the grounded mechanism, especially for the soft attention. The soft attention may benefit more of the grounded image because of the wide range of spatial locations it looks at, especially compared to the stochastic attention. This motivates us to dig into more complex grounding techniques in order to give the machine a deeper understanding of the modalities.",
"Note that even though our baseline NMT model is basically the same as BIBREF6 CalixtoLC17b, our experiments results are slightly better. This is probably due to the different use of dropout and subwords. We also compared our results to BIBREF16 caglayan2016does because our multimodal models are nearly identical with the major exception of the gating scalar (cfr. section SECREF4 ). This motivated some of our qualitative analysis and hesitation towards the current architecture in the next section."
],
[
"For space-saving and ergonomic reasons, we only discuss about the hard stochastic and soft attention, the latter being a generalization of the local attention.",
"As we can see in Figure FIGREF44 , the soft attention model is looking roughly at the same region of the image for every decoding step INLINEFORM0 . Because the words \"hund\"(dog), \"wald\"(forest) or \"weg\"(way) in left image are objects, they benefit from a high gating scalar. As a matter of fact, the attention mechanism has learned to detect the objects within a scene (at every time-step, whichever word we are decoding as shown in the right image) and the gating scalar has learned to decide whether or not we have to look at the picture (or more accurately whether or not we are translating an object). Without this scalar, the translation scores undergo a massive drop (as seen in BIBREF16 caglayan2016does) which means that the attention mechanisms don't really understand the more complex relationships between objects, what is really happening in the scene. Surprisingly, the gating scalar happens to be really low in the stochastic attention mechanism: a significant amount of sentences don't have a summed gating scalar INLINEFORM1 0.10. The model totally discards the image in the translation process.",
"It is also worth to mention that we use a ResNet trained on 1.28 million images for a classification tasks. The features used by the attention mechanism are strongly object-oriented and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and word-embeddings took care of the right translation for relationships between objects and time-dependencies. Yet, we noticed a common misbehavior for all our multimodal models: if the attention loose track of the objects in the picture and \"gets lost\", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally mislead. We illustrate with an example:",
"The monomodal translation has a sentence-level BLEU of 82.16 whilst the soft attention and hard stochastic attention scores are of 16.82 and 34.45 respectively. Figure FIGREF47 shows the attention maps for both mechanism. Nevertheless, one has to concede that the use of images indubitably helps the translation as shown in the score tabular."
],
[
"We have tried different attention mechanism and tweaks for the image modality. We showed improvements and encouraging results overall on the Flickr30K Entities dataset. Even though we identified some flaws of the current attention mechanisms, we can conclude pretty safely that images are an helpful resource for the machine in a translation task. We are looking forward to try out richer and more suitable features for multimodal translation (ie. dense captioning features). Another interesting approach would be to use visually grounded word embeddings to capture visual notions of semantic relatedness."
],
[
"This work was partly supported by the Chist-Era project IGLU with contribution from the Belgian Fonds de la Recherche Scientique (FNRS), contract no. R.50.11.15.F, and by the FSO project VCYCLE with contribution from the Belgian Waloon Region, contract no. 1510501."
]
]
} | {
"question": [
"What misbehavior is identified?",
"What is the baseline used?",
"Which attention mechanisms do they compare?"
],
"question_id": [
"f0317e48dafe117829e88e54ed2edab24b86edb1",
"ec91b87c3f45df050e4e16018d2bf5b62e4ca298",
"f129c97a81d81d32633c94111018880a7ffe16d1"
],
"nlp_background": [
"",
"",
""
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"if the attention loose track of the objects in the picture and \"gets lost\", the model still takes it into account and somehow overrides the information brought by the text-based annotations"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"It is also worth to mention that we use a ResNet trained on 1.28 million images for a classification tasks. The features used by the attention mechanism are strongly object-oriented and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and word-embeddings took care of the right translation for relationships between objects and time-dependencies. Yet, we noticed a common misbehavior for all our multimodal models: if the attention loose track of the objects in the picture and \"gets lost\", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally mislead. We illustrate with an example:"
],
"highlighted_evidence": [
"Yet, we noticed a common misbehavior for all our multimodal models: if the attention loose track of the objects in the picture and \"gets lost\", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally mislead. "
]
},
{
"unanswerable": false,
"extractive_spans": [
"if the attention loose track of the objects in the picture and \"gets lost\", the model still takes it into account and somehow overrides the information brought by the text-based annotations"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"It is also worth to mention that we use a ResNet trained on 1.28 million images for a classification tasks. The features used by the attention mechanism are strongly object-oriented and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and word-embeddings took care of the right translation for relationships between objects and time-dependencies. Yet, we noticed a common misbehavior for all our multimodal models: if the attention loose track of the objects in the picture and \"gets lost\", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally mislead. We illustrate with an example:"
],
"highlighted_evidence": [
"Yet, we noticed a common misbehavior for all our multimodal models: if the attention loose track of the objects in the picture and \"gets lost\", the model still takes it into account and somehow overrides the information brought by the text-based annotations."
]
}
],
"annotation_id": [
"6b56c74d6c4230b2d8b67b119573deca162fc56c",
"afd879dfc9e111ce674b6fb6f5616aeadd7de942"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"5ae6f4ab839c07f94ec2ed480da5688c23293d70"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Soft attention",
"Hard Stochastic attention",
"Local Attention"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . They have in common the fact that at each time step INLINEFORM1 of the decoding phase, all approaches first take as input the annotation sequence INLINEFORM2 to derive a time-dependent context vector that contain relevant information in the image to help predict the current target word INLINEFORM3 . Even though these models differ in how the time-dependent context vector is derived, they share the same subsequent steps. For each mechanism, we propose two hand-picked illustrations showing where the attention is placed in an image.",
"Soft attention",
"Hard Stochastic attention",
"Local Attention"
],
"highlighted_evidence": [
"We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 .",
"Soft attention",
"Hard Stochastic attention",
"Local Attention"
]
}
],
"annotation_id": [
"2cacad02724b948ee133a8fb9fdf9fcb7218604c"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
]
} | {
"caption": [
"Figure 1: Die beiden Kinder spielen auf dem Spielplatz .",
"Figure 2: Ein Junge sitzt auf und blickt aus einem Mikroskop .",
"Figure 3: Ein Mann sitzt neben einem Computerbildschirm .",
"Figure 4: Ein Mann in einem orangefarbenen Hemd und mit Helm .",
"Figure 5: Ein Mädchen mit einer Schwimmweste",
"Figure 6: Ein kleiner schwarzer Hund springt über Hindernisse .",
"Table 1: Results on the 1000 test triples of the Multi30K dataset. We pick Calixto et al. (2017) scores as baseline and report our results accordingly (green for improvement and red for deterioration). In each of our experiments, Soft attention is used for text. The comparison is hence with respect to the attention mechanism used for the image modality.",
"Figure 7: Representative figures of the soft-attention behavior discussed in §5.3",
"Figure 8: Wrong detection for both Soft attention (top) and Hard stochastic attention (bottom)"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"5-Figure5-1.png",
"5-Figure6-1.png",
"7-Table1-1.png",
"8-Figure7-1.png",
"8-Figure8-1.png"
]
} |
1809.04960 | Unsupervised Machine Commenting with Neural Variational Topic Model | Article comments can provide supplementary opinions and facts for readers, thereby increasing the attraction and engagement of articles. Therefore, automatic commenting is helpful in improving the activeness of communities such as online forums and news websites. Previous work shows that training an automatic commenting system requires large parallel corpora. Although some articles are naturally paired with comments on certain websites, most articles and comments on the Internet are unpaired. To fully exploit the unpaired data, we completely remove the need for parallel data and propose a novel unsupervised approach to train an automatic article commenting model, relying on nothing but unpaired articles and comments. Our model is based on a retrieval-based commenting framework, which uses the news to retrieve comments based on the similarity of their topics. The topic representation is obtained from a neural variational topic model, which is trained in an unsupervised manner. We evaluate our model on a news comment dataset. Experiments show that our proposed topic-based approach significantly outperforms previous lexicon-based models. The model also profits from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | {
"section_name": [
"Introduction",
"Machine Commenting",
"Challenges",
"Solutions",
"Proposed Approach",
"Retrieval-based Commenting",
"Neural Variational Topic Model",
"Training",
"Datasets",
"Implementation Details",
"Baselines",
"Retrieval Evaluation",
"Generative Evaluation",
"Analysis and Discussion",
"Article Comment",
"Topic Model and Variational Auto-Encoder",
"Conclusion"
],
"paragraphs": [
[
"Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors.",
"Because of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. Since comments on social media are more various and recent, it is important to exploit these unpaired data.",
"Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments.",
"To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need of parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topic is represented with a variational topic, which is trained in an unsupervised manner.",
"The contributions of this work are as follows:"
],
[
"In this section, we highlight the research challenges of machine commenting, and provide some solutions to deal with these challenges."
],
[
"Here, we first introduce the challenges of building a well-performed machine commenting system.",
"The generative model, such as the popular sequence-to-sequence model, is a direct choice for supervised machine commenting. One can use the title or the content of the article as the encoder input, and the comments as the decoder output. However, we find that the mode collapse problem is severed with the sequence-to-sequence model. Despite the input articles being various, the outputs of the model are very similar. The reason mainly comes from the contradiction between the complex pattern of generating comments and the limited parallel data. In other natural language generation tasks, such as machine translation and text summarization, the target output of these tasks is strongly related to the input, and most of the required information is involved in the input text. However, the comments are often weakly related to the input articles, and part of the information in the comments is external. Therefore, it requires much more paired data for the supervised model to alleviate the mode collapse problem.",
"One article can have multiple correct comments, and these comments can be very semantically different from each other. However, in the training set, there is only a part of the correct comments, so the other correct comments will be falsely regarded as the negative samples by the supervised model. Therefore, many interesting and informative comments will be discouraged or neglected, because they are not paired with the articles in the training set.",
"There is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comments often have some external information, or even tell an opposite opinion from the articles. Therefore, it is difficult to automatically mine the relationship between articles and comments."
],
[
"Facing the above challenges, we provide three solutions to the problems.",
"Given a large set of candidate comments, the retrieval model can select some comments by matching articles with comments. Compared with the generative model, the retrieval model can achieve more promising performance. First, the retrieval model is less likely to suffer from the mode collapse problem. Second, the generated comments are more predictable and controllable (by changing the candidate set). Third, the retrieval model can be combined with the generative model to produce new comments (by adding the outputs of generative models to the candidate set).",
"The unsupervised learning method is also important for machine commenting to alleviate the problems descried above. Unsupervised learning allows the model to exploit more data, which helps the model to learn more complex patterns of commenting and improves the generalization of the model. Many comments provide some unique opinions, but they do not have paired articles. For example, many interesting comments on social media (e.g. Twitter) are about recent news, but require redundant work to match these comments with the corresponding news articles. With the help of the unsupervised learning method, the model can also learn to generate these interesting comments. Additionally, the unsupervised learning method does not require negative samples in the training stage, so that it can alleviate the negative sampling bias.",
"Although there is semantic gap between the articles and the comments, we find that most articles and comments share the same topics. Therefore, it is possible to bridge the semantic gap by modeling the topics of both articles and comments. It is also similar to how humans generate comments. Humans do not need to go through the whole article but are capable of making a comment after capturing the general topics."
],
[
"We now introduce our proposed approach as an implementation of the solutions above. We first give the definition and the denotation of the problem. Then, we introduce the retrieval-based commenting framework. After that, a neural variational topic model is introduced to model the topics of the comments and the articles. Finally, semi-supervised training is used to combine the advantage of both supervised and unsupervised learning."
],
[
"Given an article, the retrieval-based method aims to retrieve a comment from a large pool of candidate comments. The article consists of a title INLINEFORM0 and a body INLINEFORM1 . The comment pool is formed from a large scale of candidate comments INLINEFORM2 , where INLINEFORM3 is the number of the unique comments in the pool. In this work, we have 4.5 million human comments in the candidate set, and the comments are various, covering different topics from pets to sports.",
"The retrieval-based model should score the matching between the upcoming article and each comments, and return the comments which is matched with the articles the most. Therefore, there are two main challenges in retrieval-based commenting. One is how to evaluate the matching of the articles and comments. The other is how to efficiently compute the matching scores because the number of comments in the pool is large.",
"To address both problems, we select the “dot-product” operation to compute matching scores. More specifically, the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 . Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the “dot-product” operation: DISPLAYFORM0 ",
"The dot-product scoring method has proven a successful in a matching model BIBREF1 . The problem of finding datapoints with the largest dot-product values is called Maximum Inner Product Search (MIPS), and there are lots of solutions to improve the efficiency of solving this problem. Therefore, even when the number of candidate comments is very large, the model can still find comments with the highest efficiency. However, the study of the MIPS is out of the discussion in this work. We refer the readers to relevant articles for more details about the MIPS BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Another advantage of the dot-product scoring method is that it does not require any extra parameters, so it is more suitable as a part of the unsupervised model."
],
[
"We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model. The neural variational topic model is based on the variational autoencoder framework, so it can be trained in an unsupervised manner. The model encodes the source text into a representation, from which it reconstructs the text.",
"We concatenate the title and the body to represent the article. In our model, the representations of the article and the comment are obtained in the same way. For simplicity, we denote both the article and the comment as “document”. Since the articles are often very long (more than 200 words), we represent the documents into bag-of-words, for saving both the time and memory cost. We denote the bag-of-words representation as INLINEFORM0 , where INLINEFORM1 is the one-hot representation of the word at INLINEFORM2 position, and INLINEFORM3 is the number of words in the vocabulary. The encoder INLINEFORM4 compresses the bag-of-words representations INLINEFORM5 into topic representations INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 are the trainable parameters. Then the decoder INLINEFORM4 reconstructs the documents by independently generating each words in the bag-of-words: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 is the number of words in the bag-of-words, and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution.",
"In order to model the topic information, we use a Dirichlet prior rather than the standard Gaussian prior. However, it is difficult to develop an effective reparameterization function for the Dirichlet prior to train VAE. Therefore, following BIBREF6 , we use the Laplace approximation BIBREF7 to Dirichlet prior INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 denotes the logistic normal distribution, INLINEFORM1 is the number of topics, and INLINEFORM2 is a parameter vector. Then, the variational lower bound is written as: DISPLAYFORM0 ",
"where the first term is the KL-divergence loss and the second term is the reconstruction loss. The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows: DISPLAYFORM0 DISPLAYFORM1 ",
"We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 , from which we reconstruct the input INLINEFORM4 .",
"At the training stage, we train the neural variational topic model with the Eq. EQREF22 . At the testing stage, we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 ."
],
[
"In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0 ",
"where INLINEFORM0 is the loss function of the unsupervised learning (Eq. refloss), INLINEFORM1 is the loss function of the supervised learning (e.g. the cross-entropy loss of Seq2Seq model), and INLINEFORM2 is a hyper-parameter to balance two parts of the loss function. Hence, the model is trained on both unpaired data INLINEFORM3 , and paired data INLINEFORM4 ."
],
[
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words."
],
[
"The hidden size of the model is 512, and the batch size is 64. The number of topics INLINEFORM0 is 100. The weight INLINEFORM1 in Eq. EQREF26 is 1.0 under the semi-supervised setting. We prune the vocabulary, and only leave 30,000 most frequent words in the vocabulary. We train the model for 20 epochs with the Adam optimizing algorithms BIBREF8 . In order to alleviate the KL vanishing problem, we set the initial learning to INLINEFORM2 , and use batch normalization BIBREF9 in each layer. We also gradually increase the KL term from 0 to 1 after each epoch."
],
[
"We compare our model with several unsupervised models and supervised models.",
"Unsupervised baseline models are as follows:",
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.",
"LDA (Topic, Non-Neural) is a popular unsupervised topic model, which discovers the abstract \"topics\" that occur in a collection of documents. We train the LDA with the articles and comments in the training set. The model retrieves the comments by the similarity of the topic representations.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"The supervised baseline models are:",
"S2S (Generative) BIBREF11 is a supervised generative model based on the sequence-to-sequence network with the attention mechanism BIBREF12 . The model uses the titles and the bodies of the articles as the encoder input, and generates the comments with the decoder.",
"IR (Retrieval) BIBREF0 is a supervised retrieval-based model, which trains a convolutional neural network (CNN) to take the articles and a comment as inputs, and output the relevance score. The positive instances for training are the pairs in the training set, and the negative instances are randomly sampled using the negative sampling technique BIBREF13 ."
],
[
"For text generation, automatically evaluate the quality of the generated text is an open problem. In particular, the comment of a piece of news can be various, so it is intractable to find out all the possible references to be compared with the model outputs. Inspired by the evaluation methods of dialogue models, we formulate the evaluation as a ranking problem. Given a piece of news and a set of candidate comments, the comment model should return the rank of the candidate comments. The candidate comment set consists of the following parts:",
"Correct: The ground-truth comments of the corresponding news provided by the human.",
"Plausible: The 50 most similar comments to the news. We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tf-idf values. We select the top 50 comments that are not the correct comments as the plausible comments.",
"Popular: The 50 most popular comments from the dataset. We count the frequency of each comments in the training set, and select the 50 most frequent comments to form the popular comment set. The popular comments are the general and meaningless comments, such as “Yes”, “Great”, “That's right', and “Make Sense”. These comments are dull and do not carry any information, so they are regarded as incorrect comments.",
"Random: After selecting the correct, plausible, and popular comments, we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set.",
"Following previous work, we measure the rank in terms of the following metrics:",
"Recall@k: The proportion of human comments found in the top-k recommendations.",
"Mean Rank (MR): The mean rank of the human comments.",
"Mean Reciprocal Rank (MRR): The mean reciprocal rank of the human comments.",
"The evaluation protocol is compatible with both retrieval models and generative models. The retrieval model can directly rank the comments by assigning a score for each comment, while the generative model can rank the candidates by the model's log-likelihood score.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.",
"We also evaluate two popular supervised models, i.e. seq2seq and IR. Since the articles are very long, we find either RNN-based or CNN-based encoders cannot hold all the words in the articles, so it requires limiting the length of the input articles. Therefore, we use an MLP-based encoder, which is the same as our model, to encode the full length of articles. In our preliminary experiments, the MLP-based encoder with full length articles achieves better scores than the RNN/CNN-based encoder with limited length articles. It shows that the seq2seq model gets low scores on all relevant metrics, mainly because of the mode collapse problem as described in Section Challenges. Unlike seq2seq, IR is based on a retrieval framework, so it achieves much better performance."
],
[
"Following previous work BIBREF0 , we evaluate the models under the generative evaluation setting. The retrieval-based models generate the comments by selecting a comment from the candidate set. The candidate set contains the comments in the training set. Unlike the retrieval evaluation, the reference comments may not appear in the candidate set, which is closer to real-world settings. Generative-based models directly generate comments without a candidate set. We compare the generated comments of either the retrieval-based models or the generative models with the five reference comments. We select four popular metrics in text generation to compare the model outputs with the references: BLEU BIBREF14 , METEOR BIBREF15 , ROUGE BIBREF16 , CIDEr BIBREF17 .",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios."
],
[
"We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. It concludes that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.",
"Although our proposed model can achieve better performance than previous models, there are still remaining two questions: why our model can outperform them, and how to further improve the performance. To address these queries, we perform error analysis to analyze the error types of our model and the baseline models. We select TF-IDF, S2S, and IR as the representative baseline models. We provide 200 unique comments as the candidate sets, which consists of four types of comments as described in the above retrieval evaluation setting: Correct, Plausible, Popular, and Random. We rank the candidate comment set with four models (TF-IDF, S2S, IR, and Proposed+IR), and record the types of top-1 comments.",
"Figure FIGREF40 shows the percentage of different types of top-1 comments generated by each model. It shows that TF-IDF prefers to rank the plausible comments as the top-1 comments, mainly because it matches articles with the comments based on the similarity of the lexicon. Therefore, the plausible comments, which are more similar in the lexicon, are more likely to achieve higher scores than the correct comments. It also shows that the S2S model is more likely to rank popular comments as the top-1 comments. The reason is the S2S model suffers from the mode collapse problem and data sparsity, so it prefers short and general comments like “Great” or “That's right”, which appear frequently in the training set. The correct comments often contain new information and different language models from the training set, so they do not obtain a high score from S2S.",
"IR achieves better performance than TF-IDF and S2S. However, it still suffers from the discrimination between the plausible comments and correct comments. This is mainly because IR does not explicitly model the underlying topics. Therefore, the correct comments which are more relevant in topic with the articles get lower scores than the plausible comments which are more literally relevant with the articles. With the help of our proposed model, proposed+IR achieves the best performance, and achieves a better accuracy to discriminate the plausible comments and the correct comments. Our proposed model incorporates the topic information, so the correct comments which are more similar to the articles in topic obtain higher scores than the other types of comments. According to the analysis of the error types of our model, we still need to focus on avoiding predicting the plausible comments."
],
[
"There are few studies regarding machine commenting. Qin et al. QinEA2018 is the first to propose the article commenting task and a dataset, which is used to evaluate our model in this work. More studies about the comments aim to automatically evaluate the quality of the comments. Park et al. ParkSDE16 propose a system called CommentIQ, which assist the comment moderators in identifying high quality comments. Napoles et al. NapolesTPRP17 propose to discriminating engaging, respectful, and informative conversations. They present a Yahoo news comment threads dataset and annotation scheme for the new task of identifying “good” online conversations. More recently, Kolhaatkar and Taboada KolhatkarT17 propose a model to classify the comments into constructive comments and non-constructive comments. In this work, we are also inspired by the recent related work of natural language generation models BIBREF18 , BIBREF19 ."
],
[
"Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text. One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 , which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents. In order to explore the space of different modeling assumptions, some black-box inference methods BIBREF22 , BIBREF23 are proposed and applied to the topic models.",
"Kingma and Welling vae propose the Variational Auto-Encoder (VAE) where the generative model and the variational posterior are based on neural networks. VAE has recently been applied to modeling the representation and the topic of the documents. Miao et al. NVDM model the representation of the document with a VAE-based approach called the Neural Variational Document Model (NVDM). However, the representation of NVDM is a vector generated from a Gaussian distribution, so it is not very interpretable unlike the multinomial mixture in the standard LDA model. To address this issue, Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space. More recently, Nallapati et al. sengen present a variational auto-encoder approach which models the posterior over the topic assignments to sentences using an RNN."
],
[
"We explore a novel way to train a machine commenting model in an unsupervised manner. According to the properties of the task, we propose using the topics to bridge the semantic gap between articles and comments. We introduce a variation topic model to represent the topics, and match the articles and comments by the similarity of their topics. Experiments show that our topic-based approach significantly outperforms previous lexicon-based models. The model can also profit from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios."
]
]
} | {
"question": [
"Which paired corpora did they use in the other experiment?",
"By how much does their system outperform the lexicon-based models?",
"Which lexicon-based models did they compare with?",
"How many comments were used?",
"How many articles did they have?",
"What news comment dataset was used?"
],
"question_id": [
"100cf8b72d46da39fedfe77ec939fb44f25de77f",
"8cc56fc44136498471754186cfa04056017b4e54",
"5fa431b14732b3c47ab6eec373f51f2bca04f614",
"33ccbc401b224a48fba4b167e86019ffad1787fb",
"cca74448ab0c518edd5fc53454affd67ac1a201c",
"b69ffec1c607bfe5aa4d39254e0770a3433a191b"
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0"
],
"highlighted_evidence": [
" In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1 . The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments. By sharing the encoder of the supervised model and the unsupervised model, we can jointly train both the models with a joint objective function: DISPLAYFORM0"
]
},
{
"unanswerable": false,
"extractive_spans": [
"Chinese dataset BIBREF0"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.",
"We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. It concludes that the proposed model can exploit the unpaired data to further improve the performance of the supervised model."
],
"highlighted_evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model.",
"We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M)."
]
}
],
"annotation_id": [
"4cab4c27ed7f23d35b539bb3b1c7380ef603afe7",
"a951e1f37364826ddf170c9076b0d647f29db95a"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Under the retrieval evaluation setting, their proposed model + IR2 had better MRR than NVDM by 0.3769, better MR by 4.6, and better Recall@10 by 20 . \nUnder the generative evaluation setting the proposed model + IR2 had better BLEU by 0.044 , better CIDEr by 0.033, better ROUGE by 0.032, and better METEOR by 0.029",
"evidence": [
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.",
"FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.",
"FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)"
],
"highlighted_evidence": [
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. ",
"FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation.",
"FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)"
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Proposed model is better than both lexical based models by significan margin in all metrics: BLEU 0.261 vs 0.250, ROUGLE 0.162 vs 0.155 etc.",
"evidence": [
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with other popular unsupervised methods, including TF-IDF, LDA, and NVDM. TF-IDF retrieves the comments by similarity of words rather than the semantic meaning, so it achieves low scores on all the retrieval metrics. The neural variational document model is based on the neural VAE framework. It can capture the semantic information, so it has better performance than the TF-IDF model. LDA models the topic information, and captures the deeper relationship between the article and comments, so it achieves improvement in all relevance metrics. Finally, our proposed model outperforms all these unsupervised methods, mainly because the proposed model learns both the semantics and the topic information.",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation. Similar to the retrieval evaluation, our proposed model outperforms the other unsupervised methods, which are TF-IDF, NVDM, and LDA, in generative evaluation. Still, the supervised IR achieves better scores than the seq2seq model. With the help of our proposed model, both IR and S2S achieve an improvement under the semi-supervised scenarios.",
"FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)"
],
"highlighted_evidence": [
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation.",
"Table TABREF32 shows the performance for our models and the baselines in generative evaluation.",
"FLOAT SELECTED: Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"FLOAT SELECTED: Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)"
]
}
],
"annotation_id": [
"2d08e056385b01322aee0901a9b84cfc9a888ee1",
"a103500a032c68c4c921e371020286f6642f2eb5"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"TF-IDF",
"NVDM"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed model.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic."
],
"highlighted_evidence": [
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline.",
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 ."
]
}
],
"annotation_id": [
"5244e8c8bd4b0b37950dfc4396147d6107ea361f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"from 50K to 4.8M"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. It concludes that the proposed model can exploit the unpaired data to further improve the performance of the supervised model."
],
"highlighted_evidence": [
"We further combine the supervised IR with our unsupervised model, which is trained with full unpaired data (4.8M) and different number of paired data (from 50K to 4.8M)."
]
}
],
"annotation_id": [
"3b43bfea62e231d06768f9eb11ddfbfb0d8973a5"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"198,112"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words."
],
"highlighted_evidence": [
"The dataset consists of 198,112 news articles."
]
}
],
"annotation_id": [
"c16bd2e6d7fedcc710352b168120d7b82f78d55a"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Chinese dataset BIBREF0"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words."
],
"highlighted_evidence": [
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments."
]
}
],
"annotation_id": [
"bd7c9ed29ee02953c27630de0beee67f7b23eba0"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 2: The performance of the unsupervised models and supervised models under the retrieval evaluation settings. (Recall@k, MRR: higher is better; MR: lower is better.)",
"Table 3: The performance of the unsupervised models and supervised models under the generative evaluation settings. (METEOR, ROUGE, CIDEr, BLEU: higher is better.)",
"Figure 1: The performance of the supervised model and the semi-supervised model trained on different paired data size.",
"Figure 2: Error types of comments generated by different models."
],
"file": [
"5-Table2-1.png",
"5-Table3-1.png",
"6-Figure1-1.png",
"6-Figure2-1.png"
]
} |
1909.08402 | Enriching BERT with Knowledge Graph Embeddings for Document Classification | In this paper, we focus on the classification of books using short descriptive texts (cover blurbs) and additional metadata. Building upon BERT, a deep neural language model, we demonstrate how to combine text representations with metadata and knowledge graph embeddings, which encode author information. Compared to the standard BERT approach we achieve considerably better results for the classification task. For a more coarse-grained classification using eight labels we achieve an F1-score of 87.20, while a detailed classification using 343 labels yields an F1-score of 64.70. We make the source code and trained models of our experiments publicly available. | {
"section_name": [
"Introduction",
"Related Work",
"Dataset and Task",
"Experiments",
"Experiments ::: Metadata Features",
"Experiments ::: Author Embeddings",
"Experiments ::: Pre-trained German Language Model",
"Experiments ::: Model Architecture",
"Experiments ::: Implementation",
"Experiments ::: Baseline",
"Results",
"Discussion",
"Conclusions and Future Work",
"Acknowledgments"
],
"paragraphs": [
[
"With ever-increasing amounts of data available, there is an increase in the need to offer tooling to speed up processing, and eventually making sense of this data. Because fully-automated tools to extract meaning from any given input to any desired level of detail have yet to be developed, this task is still at least supervised, and often (partially) resolved by humans; we refer to these humans as knowledge workers. Knowledge workers are professionals that have to go through large amounts of data and consolidate, prepare and process it on a daily basis. This data can originate from highly diverse portals and resources and depending on type or category, the data needs to be channelled through specific down-stream processing pipelines. We aim to create a platform for curation technologies that can deal with such data from diverse sources and that provides natural language processing (NLP) pipelines tailored to particular content types and genres, rendering this initial classification an important sub-task.",
"In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task.",
"Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT; BIBREF1) outperformed previous state-of-the-art methods by a large margin on various NLP tasks. We adopt BERT for text-based classification and extend the model with additional metadata provided in the context of the shared task, such as author, publisher, publishing date, etc.",
"A key contribution of this paper is the inclusion of additional (meta) data using a state-of-the-art approach for text processing. Being a transfer learning approach, it facilitates the task solution with external knowledge for a setup in which relatively little training data is available. More precisely, we enrich BERT, as our pre-trained text representation model, with knowledge graph embeddings that are based on Wikidata BIBREF2, add metadata provided by the shared task organisers (title, author(s), publishing date, etc.) and collect additional information on authors for this particular document classification task. As we do not rely on text-based features alone but also utilize document metadata, we consider this as a document classification problem. The proposed approach is an attempt to solve this problem exemplary for single dataset provided by the organisers of the shared task."
],
[
"A central challenge in work on genre classification is the definition of a both rigid (for theoretical purposes) and flexible (for practical purposes) mode of representation that is able to model various dimensions and characteristics of arbitrary text genres. The size of the challenge can be illustrated by the observation that there is no clear agreement among researchers regarding actual genre labels or their scope and consistency. There is a substantial amount of previous work on the definition of genre taxonomies, genre ontologies, or sets of labels BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Since we work with the dataset provided by the organisers of the 2019 GermEval shared task, we adopt their hierarchy of labels as our genre palette. In the following, we focus on related work more relevant to our contribution.",
"With regard to text and document classification, BERT (Bidirectional Encoder Representations from Transformers) BIBREF1 is a pre-trained embedding model that yields state of the art results in a wide span of NLP tasks, such as question answering, textual entailment and natural language inference learning BIBREF8. BIBREF9 are among the first to apply BERT to document classification. Acknowledging challenges like incorporating syntactic information, or predicting multiple labels, they describe how they adapt BERT for the document classification task. In general, they introduce a fully-connected layer over the final hidden state that contains one neuron each representing an input token, and further optimize the model choosing soft-max classifier parameters to weight the hidden state layer. They report state of the art results in experiments based on four popular datasets. An approach exploiting Hierarchical Attention Networks is presented by BIBREF10. Their model introduces a hierarchical structure to represent the hierarchical nature of a document. BIBREF10 derive attention on the word and sentence level, which makes the attention mechanisms react flexibly to long and short distant context information during the building of the document representations. They test their approach on six large scale text classification problems and outperform previous methods substantially by increasing accuracy by about 3 to 4 percentage points. BIBREF11 (the organisers of the GermEval 2019 shared task on hierarchical text classification) use shallow capsule networks, reporting that these work well on structured data for example in the field of visual inference, and outperform CNNs, LSTMs and SVMs in this area. They use the Web of Science (WOS) dataset and introduce a new real-world scenario dataset called Blurb Genre Collection (BGC).",
"With regard to external resources to enrich the classification task, BIBREF12 experiment with external knowledge graphs to enrich embedding information in order to ultimately improve language understanding. They use structural knowledge represented by Wikidata entities and their relation to each other. A mix of large-scale textual corpora and knowledge graphs is used to further train language representation exploiting ERNIE BIBREF13, considering lexical, syntactic, and structural information. BIBREF14 propose and evaluate an approach to improve text classification with knowledge from Wikipedia. Based on a bag of words approach, they derive a thesaurus of concepts from Wikipedia and use it for document expansion. The resulting document representation improves the performance of an SVM classifier for predicting text categories."
],
[
"Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books. The dataset contains 20,784 German books. Each record has:",
"A title.",
"A list of authors. The average number of authors per book is 1.13, with most books (14,970) having a single author and one outlier with 28 authors.",
"A short descriptive text (blurb) with an average length of 95 words.",
"A URL pointing to a page on the publisher's website.",
"An ISBN number.",
"The date of publication.",
"The books are labeled according to the hierarchy used by the German publisher Random House. This taxonomy includes a mix of genre and topical categories. It has eight top-level genre categories, 93 on the second level and 242 on the most detailed third level. The eight top-level labels are `Ganzheitliches Bewusstsein' (holistic awareness/consciousness), `Künste' (arts), `Sachbuch' (non-fiction), `Kinderbuch & Jugendbuch' (children and young adults), `Ratgeber' (counselor/advisor), `Literatur & Unterhaltung' (literature and entertainment), `Glaube & Ethik' (faith and ethics), `Architektur & Garten' (architecture and garden). We refer to the shared task description for details on the lower levels of the ontology.",
"Note that we do not have access to any of the full texts. Hence, we use the blurbs as input for BERT. Given the relatively short average length of the blurbs, this considerably decreases the amount of data points available for a single book.",
"The shared task is divided into two sub-task. Sub-task A is to classify a book, using the information provided as explained above, according to the top-level of the taxonomy, selecting one or more of the eight labels. Sub-task B is to classify a book according to the detailed taxonomy, specifying labels on the second and third level of the taxonomy as well (in total 343 labels). This renders both sub-tasks a multi-label classification task."
],
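The record structure listed above maps naturally onto a small data container. The following sketch only illustrates the described fields; the attribute names and the labels field are assumptions made for illustration, not the shared task's official schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class BookRecord:
    """One record of the GermEval 2019 blurb dataset as described above."""
    title: str
    authors: List[str]          # on average 1.13 authors per book
    blurb: str                  # short descriptive text, ~95 words on average
    url: str                    # page on the publisher's website
    isbn: str
    published: date             # date of publication
    labels: List[str] = field(default_factory=list)  # hierarchical genre labels
```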
[
"As indicated in Section SECREF1, we base our experiments on BERT in order to explore if it can be successfully adopted to the task of book or document classification. We use the pre-trained models and enrich them with additional metadata and tune the models for both classification sub-tasks."
],
[
"In addition to the metadata provided by the organisers of the shared task (see Section SECREF3), we add the following features.",
"Number of authors.",
"Academic title (Dr. or Prof.), if found in author names (0 or 1).",
"Number of words in title.",
"Number of words in blurb.",
"Length of longest word in blurb.",
"Mean word length in blurb.",
"Median word length in blurb.",
"Age in years after publication date.",
"Probability of first author being male or female based on the Gender-by-Name dataset. Available for 87% of books in training set (see Table TABREF21).",
"The statistics (length, average, etc.) regarding blurbs and titles are added in an attempt to make certain characteristics explicit to the classifier. For example, books labeled `Kinderbuch & Jugendbuch' (children and young adults) have a title that is on average 5.47 words long, whereas books labeled `Künste' (arts) on average have shorter titles of 3.46 words. The binary feature for academic title is based on the assumption that academics are more likely to write non-fiction. The gender feature is included to explore (and potentially exploit) whether or not there is a gender-bias for particular genres."
],
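A minimal sketch of how the ten-dimensional metadata vector described in the feature list above could be assembled. It assumes a BookRecord-like object with authors, title, blurb, and published fields (as in the earlier sketch) and a gender_prob pair taken from a Gender-by-Name lookup; this is not the authors' published code:

```python
import statistics
from datetime import date

def metadata_features(book, gender_prob=(0.0, 0.0)):
    """Assemble the ten-dimensional metadata vector: eight scalar features
    plus two gender dimensions. `gender_prob` is an assumed
    (P(male), P(female)) pair, (0.0, 0.0) when the name is not covered."""
    blurb_words = book.blurb.split()
    lengths = [len(w) for w in blurb_words] or [0]
    return [
        len(book.authors),                                            # number of authors
        int(any("Dr." in a or "Prof." in a for a in book.authors)),   # academic title flag
        len(book.title.split()),                                      # words in title
        len(blurb_words),                                             # words in blurb
        max(lengths),                                                 # longest word in blurb
        statistics.mean(lengths),                                     # mean word length
        statistics.median(lengths),                                   # median word length
        (date.today() - book.published).days / 365.25,                # age in years
        gender_prob[0],                                               # P(first author male)
        gender_prob[1],                                               # P(first author female)
    ]
```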
[
"Whereas one should not judge a book by its cover, we argue that additional information on the author can support the classification task. Authors often adhere to their specific style of writing and are likely to specialize in a specific genre.",
"To be precise, we want to include author identity information, which can be retrieved by selecting particular properties from, for example, the Wikidata knowledge graph (such as date of birth, nationality, or other biographical features). A drawback of this approach, however, is that one has to manually select and filter those properties that improve classification performance. This is why, instead, we follow a more generic approach and utilize automatically generated graph embeddings as author representations.",
"Graph embedding methods create dense vector representations for each node such that distances between these vectors predict the occurrence of edges in the graph. The node distance can be interpreted as topical similarity between the corresponding authors.",
"We rely on pre-trained embeddings based on PyTorch BigGraph BIBREF15. The graph model is trained on the full Wikidata graph, using a translation operator to represent relations. Figure FIGREF23 visualizes the locality of the author embeddings.",
"To derive the author embeddings, we look up Wikipedia articles that match with the author names and map the articles to the corresponding Wikidata items. If a book has multiple authors, the embedding of the first author for which an embedding is available is used. Following this method, we are able to retrieve embeddings for 72% of the books in the training and test set (see Table TABREF21)."
],
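A hedged sketch of the author-embedding lookup described above. The name_to_qid mapping (author name to Wikidata QID) and qid_to_vector mapping (QID to a 200-dimensional PyTorch-BigGraph vector) are assumed to be prepared offline, and the zero-vector fallback is also an assumption rather than something stated in the text:

```python
import numpy as np

EMB_DIM = 200  # dimensionality of the pre-trained PyTorch-BigGraph vectors

def author_embedding(authors, name_to_qid, qid_to_vector):
    """Return the embedding of the first author that can be resolved.

    `name_to_qid` and `qid_to_vector` are assumed lookup tables built from
    Wikipedia/Wikidata and the pre-trained graph embeddings."""
    for name in authors:
        qid = name_to_qid.get(name)
        if qid is not None and qid in qid_to_vector:
            return qid_to_vector[qid]
    return np.zeros(EMB_DIM)  # fallback when no author embedding is available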
[
"Although the pre-trained BERT language models are multilingual and, therefore, support German, we rely on a BERT model that was exclusively pre-trained on German text, as published by the German company Deepset AI. This model was trained from scratch on the German Wikipedia, news articles and court decisions. Deepset AI reports better performance for the German BERT models compared to the multilingual models on previous German shared tasks (GermEval2018-Fine and GermEval 2014)."
],
[
"Our neural network architecture, shown in Figure FIGREF31, resembles the original BERT model BIBREF1 and combines text- and non-text features with a multilayer perceptron (MLP).",
"The BERT architecture uses 12 hidden layers, each layer consists of 768 units. To derive contextualized representations from textual features, the book title and blurb are concatenated and then fed through BERT. To minimize the GPU memory consumption, we limit the input length to 300 tokens (which is shorter than BERT's hard-coded limit of 512 tokens). Only 0.25% of blurbs in the training set consist of more than 300 words, so this cut-off can be expected to have minor impact.",
"The non-text features are generated in a separate preprocessing step. The metadata features are represented as a ten-dimensional vector (two dimensions for gender, see Section SECREF10). Author embedding vectors have a length of 200 (see Section SECREF22). In the next step, all three representations are concatenated and passed into a MLP with two layers, 1024 units each and ReLu activation function. During training, the MLP is supposed to learn a non-linear combination of its input representations. Finally, the output layer does the actual classification. In the SoftMax output layer each unit corresponds to a class label. For sub-task A the output dimension is eight. We treat sub-task B as a standard multi-label classification problem, i. e., we neglect any hierarchical information. Accordingly, the output layer for sub-task B has 343 units. When the value of an output unit is above a given threshold the corresponding label is predicted, whereby thresholds are defined separately for each class. The optimum was found by varying the threshold in steps of $0.1$ in the interval from 0 to 1."
],
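A minimal PyTorch sketch of the architecture described above: BERT text features concatenated with the ten-dimensional metadata vector and the 200-dimensional author embedding, followed by a two-layer MLP with 1024 ReLU units and an output layer of 8 (sub-task A) or 343 (sub-task B) units. The German model name, the dropout placement, and returning softmax scores follow the text, but the details are assumptions and not the authors' code:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertWithMetadata(nn.Module):
    """Sketch of the described architecture: BERT text features are concatenated
    with the metadata vector and the author embedding and fed into an MLP."""

    def __init__(self, num_labels, bert_name="bert-base-german-cased",
                 meta_dim=10, author_dim=200):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)  # 12 layers, 768 units
        combined = self.bert.config.hidden_size + meta_dim + author_dim
        self.mlp = nn.Sequential(
            nn.Linear(combined, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Dropout(0.1),               # dropout value from the paper; its
            nn.Linear(1024, num_labels),   # placement here is an assumption
        )

    def forward(self, input_ids, attention_mask, metadata, author_emb):
        pooled = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).pooler_output
        features = torch.cat([pooled, metadata, author_emb], dim=-1)
        # The paper describes a softmax output whose per-class scores are then
        # compared against individually tuned thresholds.
        return torch.softmax(self.mlp(features), dim=-1)
```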
[
"Training is performed with batch size $b=16$, dropout probability $d=0.1$, learning rate $\\eta =2^{-5}$ (Adam optimizer) and 5 training epochs. These hyperparameters are the ones proposed by BIBREF1 for BERT fine-tuning. We did not experiment with hyperparameter tuning ourselves except for optimizing the classification threshold for each class separately. All experiments are run on a GeForce GTX 1080 Ti (11 GB), whereby a single training epoch takes up to 10min. If there is no single label for which prediction probability is above the classification threshold, the most popular label (Literatur & Unterhaltung) is used as prediction."
],
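The per-class threshold search mentioned above (varying the threshold in steps of 0.1 between 0 and 1) could look roughly like this; choosing each threshold by per-class F1 on the validation data is an assumption about how the optimum was defined:

```python
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(val_scores, val_labels, steps=np.arange(0.0, 1.01, 0.1)):
    """Pick one decision threshold per class by maximising F1 on validation data.

    `val_scores` is an (n_examples, n_classes) array of model outputs and
    `val_labels` the corresponding binary label matrix."""
    thresholds = np.zeros(val_scores.shape[1])
    for c in range(val_scores.shape[1]):
        f1s = [f1_score(val_labels[:, c], val_scores[:, c] >= t, zero_division=0)
               for t in steps]
        thresholds[c] = steps[int(np.argmax(f1s))]
    return thresholds

# At prediction time a label is emitted when its score exceeds its threshold;
# if no label passes, the most frequent label serves as the fallback prediction.
```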
[
"To compare against a relatively simple baseline, we implemented a Logistic Regression classifier chain from scikit-learn BIBREF16. This baseline uses the text only and converts it to TF-IDF vectors. As with the BERT model, it performs 8-class multi-label classification for sub-task A and 343-class multi-label classification for sub-task B, ignoring the hierarchical aspect in the labels."
],
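A sketch of the described baseline using scikit-learn's TfidfVectorizer and a ClassifierChain over LogisticRegression; the vocabulary cap and iteration limit are assumptions, since the text does not state them:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain
from sklearn.pipeline import make_pipeline

def build_baseline():
    """TF-IDF text features fed into a Logistic Regression classifier chain;
    the same pipeline handles 8 labels (sub-task A) or 343 labels (sub-task B)."""
    return make_pipeline(
        TfidfVectorizer(max_features=50000),                 # cap is an assumption
        ClassifierChain(LogisticRegression(max_iter=1000)),  # one classifier per label
    )

# texts: list of "title + blurb" strings; Y: binary indicator matrix of labels
# baseline = build_baseline().fit(texts, Y)
```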
[
"Table TABREF34 shows the results of our experiments. As prescribed by the shared task, the essential evaluation metric is the micro-averaged F1-score. All scores reported in this paper are obtained using models that are trained on the training set and evaluated on the validation set. For the final submission to the shared task competition, the best-scoring setup is used and trained on the training and validation sets combined.",
"We are able to demonstrate that incorporating metadata features and author embeddings leads to better results for both sub-tasks. With an F1-score of 87.20 for task A and 64.70 for task B, the setup using BERT-German with metadata features and author embeddings (1) outperforms all other setups. Looking at the precision score only, BERT-German with metadata features (2) but without author embeddings performs best.",
"In comparison to the baseline (7), our evaluation shows that deep transformer models like BERT considerably outperform the classical TF-IDF approach, also when the input is the same (using the title and blurb only). BERT-German (4) and BERT-Multilingual (5) are only using text-based features (title and blurb), whereby the text representations of the BERT-layers are directly fed into the classification layer.",
"To establish the information gain of author embeddings, we train a linear classifier on author embeddings, using this as the only feature. The author-only model (6) is exclusively evaluated on books for which author embeddings are available, so the numbers are based on a slightly smaller validation set. With an F1-score of 61.99 and 32.13 for sub-tasks A and B, respectively, the author model yields the worst result. However, the information contained in the author embeddings help improve performance, as the results of the best-performing setup show. When evaluating the best model (1) only on books for that author embeddings are available, we find a further improvement with respect to F1 score (task A: from 87.20 to 87.81; task B: 64.70 to 65.74)."
],
[
"The best performing setup uses BERT-German with metadata features and author embeddings. In this setup the most data is made available to the model, indicating that, perhaps not surprisingly, more data leads to better classification performance. We expect that having access to the actual text of the book will further increase performance. The average number of words per blurb is 95 and only 0.25% of books exceed our cut-off point of 300 words per blurb. In addition, the distribution of labeled books is imbalanced, i.e. for many classes only a single digit number of training instances exist (Fig. FIGREF38). Thus, this task can be considered a low resource scenario, where including related data (such as author embeddings and author identity features such as gender and academic title) or making certain characteristics more explicit (title and blurb length statistics) helps. Furthermore, it should be noted that the blurbs do not provide summary-like abstracts of the book, but instead act as teasers, intended to persuade the reader to buy the book.",
"As reflected by the recent popularity of deep transformer models, they considerably outperform the Logistic Regression baseline using TF-IDF representation of the blurbs. However, for the simpler sub-task A, the performance difference between the baseline model and the multilingual BERT model is only six points, while consuming only a fraction of BERT's computing resources. The BERT model trained for German (from scratch) outperforms the multilingual BERT model by under three points for sub-task A and over six points for sub-task B, confirming the findings reported by the creators of the BERT-German models for earlier GermEval shared tasks.",
"While generally on par for sub-task A, for sub-task B there is a relatively large discrepancy between precision and recall scores. In all setups, precision is considerably higher than recall. We expect this to be down to the fact that for some of the 343 labels in sub-task B, there are very few instances. This means that if the classifier predicts a certain label, it is likely to be correct (i. e., high precision), but for many instances having low-frequency labels, this low-frequency label is never predicted (i. e., low recall).",
"As mentioned in Section SECREF30, we neglect the hierarchical nature of the labels and flatten the hierarchy (with a depth of three levels) to a single set of 343 labels for sub-task B. We expect this to have negative impact on performance, because it allows a scenario in which, for a particular book, we predict a label from the first level and also a non-matching label from the second level of the hierarchy. The example Coenzym Q10 (Table TABREF36) demonstrates this issue. While the model correctly predicts the second level label Gesundheit & Ernährung (health & diet), it misses the corresponding first level label Ratgeber (advisor). Given the model's tendency to higher precision rather than recall in sub-task B, as a post-processing step we may want to take the most detailed label (on the third level of the hierarchy) to be correct and manually fix the higher level labels accordingly. We leave this for future work and note that we expect this to improve performance, but it is hard to say by how much. We hypothesize that an MLP with more and bigger layers could improve the classification performance. However, this would increase the number of parameters to be trained, and thus requires more training data (such as the book's text itself, or a summary of it)."
],
[
"In this paper we presented a way of enriching BERT with knowledge graph embeddings and additional metadata. Exploiting the linked knowledge that underlies Wikidata improves performance for our task of document classification. With this approach we improve the standard BERT models by up to four percentage points in accuracy. Furthermore, our results reveal that with task-specific information such as author names and publication metadata improves the classification task essentially compared a text-only approach. Especially, when metadata feature engineering is less trivial, adding additional task-specific information from an external knowledge source such as Wikidata can help significantly. The source code of our experiments and the trained models are publicly available.",
"Future work comprises the use of hierarchical information in a post-processing step to refine the classification. Another promising approach to tackle the low resource problem for task B would be to use label embeddings. Many labels are similar and semantically related. The relationships between labels can be utilized to model in a joint embedding space BIBREF17. However, a severe challenge with regard to setting up label embeddings is the quite heterogeneous category system that can often be found in use online. The Random House taxonomy (see above) includes category names, i. e., labels, that relate to several different dimensions including, among others, genre, topic and function.",
"This work is done in the context of a larger project that develops a platform for curation technologies. Under the umbrella of this project, the classification of pieces of incoming text content according to an ontology is an important step that allows the routing of this content to particular, specialized processing workflows, including parameterising the included pipelines. Depending on content type and genre, it may make sense to apply OCR post-processing (for digitized books from centuries ago), machine translation (for content in languages unknown to the user), information extraction, or other particular and specialized procedures. Constructing such a generic ontology for digital content is a challenging task, and classification performance is heavily dependent on input data (both in shape and amount) and on the nature of the ontology to be used (in the case of this paper, the one predefined by the shared task organisers). In the context of our project, we continue to work towards a maximally generic content ontology, and at the same time towards applied classification architectures such as the one presented in this paper."
],
[
"This research is funded by the German Federal Ministry of Education and Research (BMBF) through the “Unternehmen Region”, instrument “Wachstumskern” QURATOR (grant no. 03WKDA1A). We would like to thank the anonymous reviewers for comments on an earlier version of this manuscript."
]
]
} | {
"question": [
"By how much do they outperform standard BERT?",
"What dataset do they use?",
"How do they combine text representations with the knowledge graph embeddings?"
],
"question_id": [
"f5cf8738e8d211095bb89350ed05ee7f9997eb19",
"bed527bcb0dd5424e69563fba4ae7e6ea1fca26a",
"aeab5797b541850e692f11e79167928db80de1ea"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"up to four percentage points in accuracy"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this paper we presented a way of enriching BERT with knowledge graph embeddings and additional metadata. Exploiting the linked knowledge that underlies Wikidata improves performance for our task of document classification. With this approach we improve the standard BERT models by up to four percentage points in accuracy. Furthermore, our results reveal that with task-specific information such as author names and publication metadata improves the classification task essentially compared a text-only approach. Especially, when metadata feature engineering is less trivial, adding additional task-specific information from an external knowledge source such as Wikidata can help significantly. The source code of our experiments and the trained models are publicly available."
],
"highlighted_evidence": [
"With this approach we improve the standard BERT models by up to four percentage points in accuracy."
]
}
],
"annotation_id": [
"ee8ea37dfd50bb0935491cd004b9536f95ba753d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"2019 GermEval shared task on hierarchical text classification"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task."
],
"highlighted_evidence": [
"hierarchical",
"In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task."
]
},
{
"unanswerable": false,
"extractive_spans": [
"GermEval 2019 shared task"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books. The dataset contains 20,784 German books. Each record has:"
],
"highlighted_evidence": [
"Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books. The dataset contains 20,784 German books."
]
}
],
"annotation_id": [
"2dc50cdf1bb37eb20d09c59088a64c982b188fcd",
"fe25cb2e8abe3f7f1a05a0a18a748eb67c061cd3"
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"all three representations are concatenated and passed into a MLP"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The BERT architecture uses 12 hidden layers, each layer consists of 768 units. To derive contextualized representations from textual features, the book title and blurb are concatenated and then fed through BERT. To minimize the GPU memory consumption, we limit the input length to 300 tokens (which is shorter than BERT's hard-coded limit of 512 tokens). Only 0.25% of blurbs in the training set consist of more than 300 words, so this cut-off can be expected to have minor impact.",
"The non-text features are generated in a separate preprocessing step. The metadata features are represented as a ten-dimensional vector (two dimensions for gender, see Section SECREF10). Author embedding vectors have a length of 200 (see Section SECREF22). In the next step, all three representations are concatenated and passed into a MLP with two layers, 1024 units each and ReLu activation function. During training, the MLP is supposed to learn a non-linear combination of its input representations. Finally, the output layer does the actual classification. In the SoftMax output layer each unit corresponds to a class label. For sub-task A the output dimension is eight. We treat sub-task B as a standard multi-label classification problem, i. e., we neglect any hierarchical information. Accordingly, the output layer for sub-task B has 343 units. When the value of an output unit is above a given threshold the corresponding label is predicted, whereby thresholds are defined separately for each class. The optimum was found by varying the threshold in steps of $0.1$ in the interval from 0 to 1."
],
"highlighted_evidence": [
"To derive contextualized representations from textual features, the book title and blurb are concatenated and then fed through BERT",
"The non-text features are generated in a separate preprocessing step. The metadata features are represented as a ten-dimensional vector (two dimensions for gender, see Section SECREF10). Author embedding vectors have a length of 200 (see Section SECREF22). In the next step, all three representations are concatenated and passed into a MLP with two layers, 1024 units each and ReLu activation function."
]
}
],
"annotation_id": [
"36d8b91931c9e5da3023449fdfa64596d510a4d2"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Availability of additional data with respect to the dataset (relative numbers in parenthesis).",
"Figure 1: Visualization of Wikidata embeddings for Franz Kafka (3D-projection with PCA)5. Nearest neighbours in original 200D space: Arthur Schnitzler, E.T.A Hoffmann and Hans Christian Andersen.",
"Figure 2: Model architecture used in our experiments. Text-features are fed through BERT, concatenated with metadata and author embeddings and combined in a multilayer perceptron (MLP).",
"Table 2: Evaluation scores (micro avg.) on validation set with respect to the features used for classification. The model with BERT-German, metadata and author embeddings yields the highest F1-scores on both tasks.",
"Table 3: Book examples and their correct and predicted labels. Hierarchical label level is in parenthesis.",
"Figure 3: In sub-task B for many low-hierarchical labels only a small number of training samples exist, making it more difficult to predict the correct label."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Figure3-1.png"
]
} |
1909.11189 | Diachronic Topics in New High German Poetry | Statistical topic models are increasingly and popularly used by Digital Humanities scholars to perform distant reading tasks on literary data. It allows us to estimate what people talk about. Especially Latent Dirichlet Allocation (LDA) has shown its usefulness, as it is unsupervised, robust, easy to use, scalable, and it offers interpretable results. In a preliminary study, we apply LDA to a corpus of New High German poetry (textgrid, with 51k poems, 8m token), and use the distribution of topics over documents for a classification of poems into time periods and for authorship attribution. | {
"section_name": [
"Corpus",
"Experiments",
"Experiments ::: Topic Trends",
"Experiments ::: Classification of Time Periods and Authorship",
"Experiments ::: Conclusion & Future Work"
],
"paragraphs": [
[
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’. However, the average length of these texts is around 150 token, dismissing most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is the circumstance that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work."
],
[
"We approach diachronic variation of poetry from two perspectives. First, as distant reading task to visualize the development of clearly interpretable topics over time. Second, as a downstream task, i.e. supervised machine learning task to determine the year (the time-slot) of publication for a given poem. We infer topic distributions over documents as features and pit them against a simple style baseline.",
"We use the implementation of LDA as it is provided in genism BIBREF4. LDA assumes that a particular document contains a mixture of few salient topics, where words are semantically related. We transform our documents (of wordforms) to a bag of words representation, filter stopwords (function words), and set the desired number of topics=100 and train for 50 epochs to attain a reasonable distinctness of topics. We choose 100 topics (rather than a lower number that might be more straightforward to interpret) as we want to later use these topics as features for downstream tasks. We find that wordforms (instead of lemma) are more useful for poetry topic models, as these capture style features (rhyme), orthographic variations ('hertz' instead of 'herz'), and generally offer more interpretable results."
],
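A minimal gensim sketch of the topic model setup described above (bag of words, stopword filtering, 100 topics); gensim's passes parameter is used here to play the role of the 50 training epochs mentioned in the text, and the random seed is an assumption:

```python
from gensim import corpora
from gensim.models import LdaModel

def train_topic_model(documents, stopwords):
    """Train the 100-topic LDA model on bag-of-words representations of poems.

    `documents` is a list of token lists (wordforms, not lemmas) and
    `stopwords` a set of German function words."""
    texts = [[w for w in doc if w.lower() not in stopwords] for doc in documents]
    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(text) for text in texts]
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=100, passes=50, random_state=0)
    return lda, dictionary, corpus
```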
[
"We retrieve the most important (likely) words for all 100 topics and interpret these (sorted) word lists as aggregated topics, e.g. topic 27 (figure 2) contains: Tugend (virtue), Kunst (art), Ruhm (fame), Geist (spirit), Verstand (mind) and Lob (praise). This topic as a whole describes the concept of ’artistic virtue’.",
"In certain clusters (topics) we find poetic residuals, such that rhyme words often cluster together (as they stand in proximity), e.g. topic 52 with: Mund (mouth), Grund (cause, ground), rund (round).",
"To discover trends of topics over time, we bin our documents into time slots of 25 years width each. See figure 1 for a plot of the number of documents per bin. The chosen binning slots offer enough documents per slot for our experiments. To visualize trends of singular topics over time, we aggregate all documents d in slot s and add the probabilities of topic t given d and divide by the number of all d in s. This gives us the average probability of a topic per timeslot. We then plot the trajectories for each single topic. See figures 2–6 for a selection of interpretable topic trends. Please note that the scaling on the y-axis differ for each topic, as some topics are more pronounced in the whole dataset overall.",
"Some topic plots are already very revealing. The topic ‘artistic virtue’ (figure 2, left) shows a sharp peak around 1700—1750, outlining the period of Enlightenment. Several topics indicate Romanticism, such as ‘flowers’ (figure 2, right), ‘song’ (figure 3, left) or ‘dust, ghosts, depths’ (not shown). The period of 'Vormärz' or 'Young Germany' is quite clear with the topic ‘German Nation’ (figure 3, right). It is however hardly distinguishable from romantic topics.",
"We find that the topics 'Beautiful Girls' (figure 4, left) and 'Life & Death' (figure 4, right) are always quite present over time, while 'Girls' is more prounounced in Romanticism, and 'Death' in Barock.",
"We find that the topic 'Fire' (figure 5, left) is a fairly modern concept, that steadily rises into modernity, possibly because of the trope 'love is fire'. Next to it, the topic 'Family' (figure 5, right) shows wild fluctuation over time.",
"Finally, figure 6 shows topics that are most informative for the downstream classification task: Topic 11 'World, Power, Time' (left) is very clearly a Barock topic, ending at 1750, while topic 19 'Heaven, Depth, Silence' is a topic that rises from Romanticism into Modernity."
],
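The aggregation described above (average topic probability per time slot) can be sketched as follows; assigning slots by integer division of the year is an assumption about how the binning was implemented:

```python
from collections import defaultdict

def topic_trends(lda, corpus, years, slot_width=25):
    """Average probability of each topic per time slot of `slot_width` years.

    `years` holds the publication year of each document in `corpus`."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for bow, year in zip(corpus, years):
        slot = (year // slot_width) * slot_width
        counts[slot] += 1
        for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
            sums[slot][topic_id] += prob
    return {slot: {t: p / counts[slot] for t, p in topics.items()}
            for slot, topics in sums.items()}
```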
[
"To test whether topic models can be used for dating poetry or attributing authorship, we perform supervised classification experiments with Random Forest Ensemble classifiers. We find that we obtain better results by training and testing on stanzas instead of full poems, as we have more data available. Also, we use 50 year slots (instead of 25) to ease the task.",
"For each document we determine a class label for a time slot. The slot 1575–1624 receives the label 0, the slot 1625–1674 the label 1, etc.. In total, we have 7 classes (time slots).",
"As a baseline, we implement rather straightforward style features, such as line length, poem length (in token, syllables, lines), cadence (number of syllables of last word in line), soundscape (ratio of closed to open syllables, see BIBREF5), and a proxy for metre, the number of syllables of the first word in the line.",
"We split the data randomly 70:30 training:testing, where a 50:50 shows (5 points) worse performance. We then train Random Forest Ensemble classifiers and perform a grid search over their parameters to determine the best classifier. Please note that our class sizes are quite imbalanced.",
"The Style baseline achieves an Accuracy of 83%, LDA features 89% and a combination of the two gets 90%. However, training on full poems reduces this to 42—52%.",
"The most informative features (by information gain) are: Topic11 (.067), Topic 37 (.055), Syllables Per Line (.046), Length of poem in syllables (.031), Topic19 (.029), Topic98 (.025), Topic27 ('virtue') (.023), and Soundscape (.023).",
"For authorship attribution, we also use a 70:30 random train:test split and use the author name as class label. We only choose the most frequent 180 authors. We find that training on stanzas gives us 71% Accuracy, but when trained on full poems, we only get 13% Accuracy. It should be further investigated is this is only because of a surplus of data."
],
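A hedged sketch of the downstream classification setup described above (70:30 split, Random Forest with a grid search); the exact parameter grid is not given in the text and is assumed here:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

def classify_time_slots(features, labels):
    """70:30 split plus a small grid search over Random Forest parameters.

    `features` concatenates each stanza's LDA topic distribution with the
    style features (line length, cadence, soundscape, ...)."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.3, random_state=0)
    grid = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [100, 300], "max_depth": [None, 20]},
        cv=3, scoring="accuracy")
    grid.fit(X_train, y_train)
    return grid.best_estimator_.score(X_test, y_test), grid.best_params_
```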
[
"We have shown the viability of Latent Dirichlet Allocation for a visualization of topic trends (the evolution of what people talk about in poetry). While most topics are easily interpretable and show a clear trend, others are quite noisy. For an exploratory experiment, the classification into time slots and for authors attribution is very promising, however far from perfect. It should be investigated whether using stanzas instead of whole poems only improves results because of more available data. Also, it needs to be determined if better topic models can deliver a better baseline for diachronic change in poetry, and if better style features will outperform semantics. Finally, only selecting clear trending and peaking topics (through co-variance) might further improve the results."
]
]
} | {
"question": [
"What is the algorithm used for the classification tasks?",
"Is the outcome of the LDA analysis evaluated in any way?",
"What is the corpus used in the study?"
],
"question_id": [
"bfa3776c30cb30e0088e185a5908e5172df79236",
"a2a66726a5dca53af58aafd8494c4de833a06f14",
"ee87608419e4807b9b566681631a8cd72197a71a"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"German",
"German",
"German"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Random Forest Ensemble classifiers"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To test whether topic models can be used for dating poetry or attributing authorship, we perform supervised classification experiments with Random Forest Ensemble classifiers. We find that we obtain better results by training and testing on stanzas instead of full poems, as we have more data available. Also, we use 50 year slots (instead of 25) to ease the task."
],
"highlighted_evidence": [
"To test whether topic models can be used for dating poetry or attributing authorship, we perform supervised classification experiments with Random Forest Ensemble classifiers. "
]
}
],
"annotation_id": [
"b19621401c5d97df4f64375d16bc639aa58c460e"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The Style baseline achieves an Accuracy of 83%, LDA features 89% and a combination of the two gets 90%. However, training on full poems reduces this to 42—52%."
],
"highlighted_evidence": [
"The Style baseline achieves an Accuracy of 83%, LDA features 89% and a combination of the two gets 90%. However, training on full poems reduces this to 42—52%."
]
}
],
"annotation_id": [
"764826094a9ccf5268e8eddab5591eb190c1ed63"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"TextGrid Repository"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’. However, the average length of these texts is around 150 token, dismissing most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is the circumstance that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work."
],
"highlighted_evidence": [
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3."
]
},
{
"unanswerable": false,
"extractive_spans": [
"The Digital Library in the TextGrid Repository"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’. However, the average length of these texts is around 150 token, dismissing most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is the circumstance that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work."
],
"highlighted_evidence": [
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work)."
]
}
],
"annotation_id": [
"2f9121fabcdac24875d9a6d5e5aa2c12232105a3",
"82c7475166ef613bc8d8ae561ed1fc9eead8820c"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [
"Fig. 1: 25 year Time Slices of Textgrid Poetry (1575–1925)",
"Fig. 2: left: Topic 27 ’Virtue, Arts’ (Period: Enlightenment), right: Topic 55 ’Flowers, Spring, Garden’ (Period: Early Romanticism)",
"Fig. 3: left: Topic 63 ’Song’ (Period: Romanticism), right: Topic 33 ’German Nation’ (Period: Vormärz, Young Germany))",
"Fig. 4: left: Topic 28 ’Beautiful Girls’ (Period: Omnipresent, Romanticism), right: Topic 77 ’Life & Death’ (Period: Omnipresent, Barock",
"Fig. 5: left: Topic 60 ’Fire’ (Period: Modernity), right: Topic 42 ’Family’ (no period, fluctuating)",
"Fig. 6: Most informative topics for classification; left: Topic 11 ’World, Power, Lust, Time’ (Period: Barock), right: Topic 19 ’Heaven, Depth, Silence’ (Period: Romanticism, Modernity)"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Figure5-1.png",
"4-Figure6-1.png"
]
} |
1810.05320 | Important Attribute Identification in Knowledge Graph | The knowledge graph(KG) composed of entities with their descriptions and attributes, and relationship between entities, is finding more and more application scenarios in various natural language processing tasks. In a typical knowledge graph like Wikidata, entities usually have a large number of attributes, but it is difficult to know which ones are important. The importance of attributes can be a valuable piece of information in various applications spanning from information retrieval to natural language generation. In this paper, we propose a general method of using external user generated text data to evaluate the relative importance of an entity's attributes. To be more specific, we use the word/sub-word embedding techniques to match the external textual data back to entities' attribute name and values and rank the attributes by their matching cohesiveness. To our best knowledge, this is the first work of applying vector based semantic matching to important attribute identification, and our method outperforms the previous traditional methods. We also apply the outcome of the detected important attributes to a language generation task; compared with previous generated text, the new method generates much more customized and informative messages. | {
"section_name": [
"The problem we solve in this paper",
"Related Research",
"What we propose and what we have done",
"Our proposed Method",
"Application Scenario",
"FastText Introduction",
"Matching",
"Data introduction",
"Data preprocessing",
"Proposed method vs previous methods",
"Result Analysis",
"Conclusions and Future work "
],
"paragraphs": [
[
"Knowledge graph(KG) has been proposed for several years and its most prominent application is in web search, for example, Google search triggers a certain entity card when a user's query matches or mentions an entity based on some statistical model. The core potential of a knowledge graph is about its capability of reasoning and inferring, and we have not seen revolutionary breakthrough in such areas yet. One main obstacle is obviously the lack of sufficient knowledge graph data, including entities, entities' descriptions, entities' attributes, and relationship between entities. A full functional knowledge graph supporting general purposed reasoning and inference might still require long years of the community's innovation and hardworking. On the other hand, many less demanding applications have great potential benefiting from the availability of information from the knowledge graph, such as query understanding and document understanding in information retrieval/search engines, simple inference in question answering systems, and easy reasoning in domain-limited decision support tools. Not only academy, but also industry companies have been heavily investing in knowledge graphs, such as Google's knowledge graph, Amazon's product graph, Facebook's Graph API, IBM's Watson, and Microsoft's Satori etc.",
"In the existing knowledge graph, such as Wikidata and DBpedia, usually attributes do not have order or priorities, and we don't know which attributes are more important and of more interest to users. Such importance score of attributes is a vital piece of information in many applications of knowledge graph. The most important application is the triggered entity card in search engine when a customer's query gets hit for an entity. An entity usually has a large amount of attributes, but an entity card has limited space and can only show the most significant information; attribute importance's presence can make the displaying of an entity card easy to implement. Attribute importance also has great potential of playing a significant role in search engine, how to decide the matching score between the query and attribute values. If the query matches a very important attribute, and the relevance contribution from such a match should be higher than matching an ignorable attribute. Another application is in e-commerce communications, and one buyer initiates a communication cycle with a seller by sending a product enquiry. Writing the enquiry on a mobile phone is inconvenient and automatic composing assistance has great potential of improving customer experience by alleviating the writing burden. In the product enquiry, customers need to specify their requirements and ask questions about products, and their requirements and questions are usually about the most important attributes of the products. If we can identify out important attributes of products, we can help customers to draft the enquiry automatically to reduce their input time."
],
[
"Many proposed approaches formulate the entity attribute ranking problem as a post processing step of automated attribute-value extraction. In BIBREF0 , BIBREF1 , BIBREF2 , Pasca et al. firstly extract potential class-attribute pairs using linguistically motivated patterns from unstructured text including query logs and query sessions, and then score the attributes using the Bayes model. In BIBREF3 , Rahul Rai proposed to identify product attributes from customer online reviews using part-of-speech(POS) tagging patterns, and to evaluate their importance with several different frequency metrics. In BIBREF4 , Lee et al. developed a system to extract concept-attribute pairs from multiple data sources, such as Probase, general web documents, query logs and external knowledge base, and aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model. Those approaches typically suffer from the poor quality of the pattern rules, and the ranking process is used to identify relatively more precise attributes from all attribute candidates.",
"As for an already existing knowledge graph, there is plenty of work in literature dealing with ranking entities by relevance without or with a query. In BIBREF5 , Li et al. introduced the OntoRank algorithm for ranking the importance of semantic web objects at three levels of granularity: document, terms and RDF graphs. The algorithm is based on the rational surfer model, successfully used in the Swoogle semantic web search engine. In BIBREF6 , Hogan et al. presented an approach that adapted the well-known PageRank/HITS algorithms to semantic web data, which took advantage of property values to rank entities. In BIBREF7 , BIBREF8 , authors also focused on ranking entities, sorting the semantic web resources based on importance, relevance and query length, and aggregating the features together with an overall ranking model.",
"Just a few works were designated to specifically address the problem of computing attribute rankings in a given Knowledge Graph. Ibminer BIBREF9 introduced a tool for infobox(alias of an entity card) template suggestion, which collected attributes from different sources and then sorted them by popularity based on their co-occurrences in the dataset. In BIBREF10 , using the structured knowledge base, intermediate features were computed, including the importance or popularity of each entity type, IDF computation for each attribute on a global basis, IDF computation for entity types etc., and then the features were aggregated to train a classifier. Also, a similar approach in BIBREF11 was designed with more features extracted from GoogleSuggestChars data. In BIBREF12 , Ali et al. introduced a new set of features that utilizes semantic information about entities as well as information from top-ranked documents from a general search engine. In order to experiment their approach, they collected a dataset by exploiting Wikipedia infoboxes, whose ordering of attributes reflect the collaborative effort of a large community of users, which might not be accurate."
],
[
"There have been broad researches on entity detection, relationship extraction, and also missing relationship prediction. For example: BIBREF13 , BIBREF14 and BIBREF15 explained how to construct a knowledge graph and how to perform representation learning on knowledge graphs. Some research has been performed on attribute extraction, such as BIBREF16 and BIBREF4 ; the latter one is quite special that it also simultaneously computes the attribute importance. As for modeling attribute importance for an existing knowledge graph which has completed attribute extractions, we found only a few existing research, all of which used simple co-occurrences to rank entity attributes. In reality, many knowledge graphs do not contain attribute importance information, for example, in the most famous Wikidata, a large amount of entities have many attributes, and it is difficult to know which attributes are significant and deserve more attention. In this research we focus on identifying important attributes in existing knowledge graphs. Specifically, we propose a new method of using extra user generated data source for evaluating the attribute importance, and we use the recently proposed state-of-the-art word/sub-word embedding techniques to match the external data with the attribute definition and values from entities in knowledge graphs. And then we use the statistics obtained from the matching to compare the attribute importance. Our method has general extensibility to any knowledge graph without attribute importance. When there is a possibility of finding external textual data source, our proposed method will work, even if the external data does not exactly match the attribute textual data, since the vector embedding performs semantic matching and does not require exact string matching.",
"The remaining of the paper is organized as follows: Section SECREF2 explains our proposed method in detail, including what kind of external data is required, and how to process the external data, and also how to perform the semantic matching and how to rank the attributes by statistics. Section SECREF3 introduces our experimentations, including our experimentation setup, data introduction and experimental result compared to other methods we do not employ. Section SECREF3 also briefly introduces our real world application scenario in e-commerce communication. Section SECREF4 draws the conclusion from our experimentations and analysis, and also we point out promising future research directions."
],
[
"In this section, we will introduce our proposed method in detail. We use our application scenario to explain the logic behind the method, but the scope is not limited to our use case, and it is possible to extend to any existing knowledge graph without attribute importance information."
],
[
"Alibaba.com is currently the world's largest cross-border business to business(B2B) E-commerce platform and it supports 17 languages for customers from all over the world. On the website, English is the dorminant language and accounts for around 50% of the traffic. The website has already accumulated a very large knowledge graph of products, and the entity here is the product or the product category; and every entity has lots of information such as the entity name, images and many attributes without ordering information. The entities are also connected by taxonomy structure and similar products usually belong to the same category/sub-category.",
"Since the B2B procurement usually involves a large amount of money, the business will be a long process beginning with a product enquiry. Generally speaking, when customers are interested in some product, they will start a communication cycle with a seller by sending a product enquiry to the seller. In the product enquiry, customers will specify their requirements and ask questions about the product. Their requirements and questions usually refer to the most important attributes of the product. Fig. FIGREF5 shows an enquery example. Alibaba.com has accumulated tens of millions of product enquires, and we would like to leverage these information, in combination of the product knowledge graph we have, to figure out the most important attributes for each category of products.",
"In our application scenario, the product knowledge graph is the existing knowledge graph and the enquiry data is the external textual data source. From now on, we will use our application scenario to explain the details of our proposed algorithm.",
"We propose an unsupervised learning framework for extracting important product attributes from product enquiries. By calculating the semantic similarity between each enquiry sentence and each attribute of the product to which the enquiry corresponds to, we identify the product attributes that the customer cares about most.",
"The attributes described in the enquiry may contain attribute names or attribute values or other expressions, for example, either the word “color” or a color instance word “purple” is mentioned. Therefore, when calculating the semantic similarity between enquiry sentences and product attributes, we need both attribute names and attribute values. The same as any other knowledge graph, the product attributes in our knowledge graph we use contain noises and mistakes. We need to clean and normalize the attribute data before consuming it. We will introduce the detail of our data cleaning process in Section SECREF14 ."
],
[
"FastText is a library created by the Facebook Research for efficient learning of word representations and sentence classification. Here, we just use the word representation functionality of it.",
"FastText models morphology by considering subword units, and representing words by a sum of its character n-grams BIBREF17 . In the original model the authors choose to use the binary logistic loss and the loss for a single instance is written as below: INLINEFORM0 ",
"By denoting the logistic loss function INLINEFORM0 , the loss over a sentence is: INLINEFORM1 ",
"The scoring function between a word INLINEFORM0 and a context word INLINEFORM1 is: INLINEFORM2 ",
"In the above functions, INLINEFORM0 is a set of negative examples sampled from the vocabulary, INLINEFORM1 is the set of indices of words surrounding word INLINEFORM2 , INLINEFORM3 is the set of n-grams appearing in word INLINEFORM4 , INLINEFORM5 is the size of the dictionary we have for n-grams, INLINEFORM6 is a vector representation to each n-gram INLINEFORM7 .",
"Compared with word2vec or glove, FastText has following advantages:",
"It is able to cover rare words and out-of-vocabulary(OOV) words. Since the basic modeling units in FastText are ngrams, and both rare words and OOV ones can obtain efficient word representations from their composing ngrams. Word2vec and glove both fail to provide accurate vector representations for these words. In our application, the training data is written by end customers, and there are many misspellings which easily become OOV words.",
"Character n-grams embeddings tend to perform superior to word2vec and glove on smaller datasets.",
"FastText is more efficient and its training is relatively fast."
],
[
"In this section, how to compute the matching between an enquiry sentence and a product attribute is explained in detail. Our explanation here is for a certain product category, and other categories are the same.",
"As you can see in Fig. FIGREF12 , each sentence is compared with each attribute of a product category that the product belongs to. We now get a score between a sentence INLINEFORM0 and an attribute INLINEFORM1 , INLINEFORM2 INLINEFORM3 ",
"where INLINEFORM0 is all the possible values for this INLINEFORM1 , INLINEFORM2 is the word vector for INLINEFORM3 . According to this formula, we can get top two attributes whose scores are above the threshold INLINEFORM4 for each sentence. We choose two attributes instead of one because there may be more than one attribute for each sentence. In addition, some sentences are greetings or self-introduction and do not contain the attribute information of the product, so we require that the score to be higher than a certain threshold."
],
[
"For our knowledge graph data, entity(product) attributes can be roughly divided into clusters of transaction order specific ones and product specific ones, in this paper, we choose the product specific ones for further study. We also need to point out that we only focus on the recommended communication language on the Alibaba.com platform, which is English.",
"To construct the evaluation dataset, top 14 categories are first chosen based on their business promotion features, and 3 millions typical products under each category were then chosen to form the attribute candidates. After preprocessing and basic filtering, top product specific attributes from the 14 different categories are chosen to be manually labeled by our annotators.",
"For each category, annotators each are asked to choose at most 10 important attributes from buyers perspective. After all annotators complete their annotations, attributes are then sorted according to the summed votes. In the end, 111 important attributes from the 14 categories are kept for final evaluation.",
"Outside of the evaluation explained in this paper, we actually have performed the matching on more than 4,000 catetories covering more than 100 million products and more than 20 million enquires. Due to limited annotation resources, we can only sample a small numbered categories(14 here) to evaluate the proposed algorithm here."
],
[
"The product enquiries and attributes data preprocessing is shown in Algorithm 1. algorithmAlgorithm Data Preprocess Algorithm [1] INLINEFORM0 INLINEFORM1 : INLINEFORM2 INLINEFORM3 INLINEFORM4 : INLINEFORM5 Invalid INLINEFORM6 filter INLINEFORM7 Split INLINEFORM8 to sentences sentence INLINEFORM9 in INLINEFORM10 INLINEFORM11 INLINEFORM12 return INLINEFORM13 ",
"Firstly, for every product enquiry, we convert the original html textual data into the plain text. Secondly we filter out the useless enquires, such as non-English enquires and spams. The regular expressions and spam detection are used to detect non-English enquiries and spams respectively. Thirdly we get sentence list INLINEFORM0 with spliting every enquiry into sentences as described in section 2.2. Then for every sentence INLINEFORM1 in INLINEFORM2 , we need to do extra three processes: a)Spelling Correction. b)Regular Measures and Numbers. c)Stop Words Dropping.",
"Spelling Correction. Since quite a lot of the product enquires and self-filled attributes were misspelled, we have replaced the exact words by fuzzyfied search using Levenshtein distance. The method uses fuzzyfied search, only if the exact match is not found. Some attributes are actually the same, such as \"type\" and \"product type\", we merge these same attributes by judging whether the attributes are contained.",
"Regular Measures and Numbers. Attributes of number type have their values composed of numbers and units, such as INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , etc. We replace all numbers (in any notation, e.g., floating point, scientific, arithmetical expression, etc.) with a unique token ( INLINEFORM4 ). For the same reason, each unit of measure is replaced with a corresponding token, eg., INLINEFORM5 is replaced with centimeter area.",
"Stop Words Dropping. Stop words appear to be of little value in the proposed matching algorithm. By removing the stop words we can focus on the important words instead. In our business scenario, we built a stop words list for foreign trade e-commerce.",
"Finally, we get the valid sentences INLINEFORM0 ."
],
[
"The existing co-occurrence methods do not suit our application scenario at all, since exact string matching is too strong a requirement and initial trial has shown its incompetency. In stead we implemented an improved version of their method based on TextRank as our baseline. In addition, we also tested multiple semantic matching algorithms for comparison with our chosen method.",
"TextRank: TextRank is a graph-based ranking model for text processing. BIBREF18 It is an unsupervised algorithm for keyword extraction. Since product attributes are usually the keywords in enquiries, we can compare these keywords with the category attributes and find the most important attributes. This method consists of three steps. The first step is to merge all enquiries under one category as an article. The second step is to extract the top 50 keywords for each category. The third step is to find the most important attributes by comparing top keywords with category attributes.",
"Word2vec BIBREF19 : We use the word vector trained by BIBREF19 as the distributed representation of words. Then we get the enquiry sentence representation and category attribute representation. Finally we collect the statistics about the matched attributes of each category, and select the most frequent attributes under the same category.",
"GloVe BIBREF20 : GloVe is a global log-bilinear regression model for the unsupervised learning of word representations, which utilizes the ratios of word-word co-occurrence probabilities. We use the GloVe method to train the distributed representation of words. And attribute selection procedure is the same as word2vec.",
"Proposed method: the detail of our proposed algorithm has been carefully explained in Section SECREF2 . There are several thresholds we need to pick in the experimentation setup. Based on trial and error analysis, we choose 0.75 as the sentence and attribute similarity threshold, which balances the precision and recall relatively well. In our application, due to product enquiry length limitation, customers usually don't refer to more than five attributes in their initial approach to the seller, we choose to keep 5 most important attributes for each category.",
"Evaluation is conducted by comparing the output of the systems with the manual annotated answers, and we calculate the precision and recall rate. INLINEFORM0 INLINEFORM1 ",
"where INLINEFORM0 is the manually labeled attributes , INLINEFORM1 is the detected important attributes.",
"Table 1 depicts the algorithm performance of each category and the overall average metrics among all categories for our approach and other methods. It can be observed that our proposed method achieves the best performance. The average F1-measure of our approach is 0.47, while the average F1-measure values of “GloVe”, “word2vect” and \"TextRank\" are 0.46, 0.42 and 0.20 respectively."
],
[
"In all our experiments, we find that FastText method outperforms other methods. By analyzing all results, we observe that semantic similarity based methods are more effective than the previous method which we implemented based on TextRank. This conclusion is understandable because lots of enquiries do not simply mention attribute words exactly, but some semantically related words are also used.",
"Evaluating FastText, GloVe and word2vec, we show that compared to other word representation learning algorithms, the FastText performs best. We sample and analyze the category attributes and find that many self-filled attributes contain misspellings. The FastText algorithm represents words by a sum of its character n-grams and it is much robust against problems like misspellings. In summary, FastText has greater advantages in dealing with natural language corpus usually with spelling mistakes.",
"We also applied the detected attributes in the automatic enquiry generation task and we obtained significantly better generated enquiries compared to previous rigid templates. Due to space limitation, we skip the explanation and leave it for future publications."
],
[
"In this paper, we proposed a new general method of identifying important attributes for entities from a knowledge graph. This is a relatively new task and our proposed method of using external textual data and performing semantic matching via word/sub-word embeddings obtained better result compared to other work of using naive string matching and counting. In addition, we also successfully applied the detected important attributes in our real world application of smart composing. In summary, the method is extensible to any knowledge graph without attribute importance information and outperforms previous method.",
"In future work, there are two major areas with potential of improving the detection accuracy. The first one is about sentence splitting. What we are trying to get is semantic cohesive unit, which can be used to match an attribute, and there might be more comprehensive method than the simple splitting by sentence ending punctuations. The second one is about improving the word embedding quality. We have implemented an in-house improved version of Fasttext, which is adapted to our data source. It is highly possible to use the improved word embedding on purpose of obtaining higher semantic matching precision. As for the application, we will try to use more statistical models in the natural language generation part of the smart composing framework of consuming the detected important attributes."
]
]
} | {
"question": [
"What are the traditional methods to identifying important attributes?",
"What do you use to calculate word/sub-word embeddings",
"What user generated text data do you use?"
],
"question_id": [
"cda4612b4bda3538d19f4b43dde7bc30c1eda4e5",
"e12674f0466f8c0da109b6076d9939b30952c7da",
"9fe6339c7027a1a0caffa613adabe8b5bb6a7d4a"
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"automated attribute-value extraction",
"score the attributes using the Bayes model",
"evaluate their importance with several different frequency metrics",
"aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model",
"OntoRank algorithm"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Many proposed approaches formulate the entity attribute ranking problem as a post processing step of automated attribute-value extraction. In BIBREF0 , BIBREF1 , BIBREF2 , Pasca et al. firstly extract potential class-attribute pairs using linguistically motivated patterns from unstructured text including query logs and query sessions, and then score the attributes using the Bayes model. In BIBREF3 , Rahul Rai proposed to identify product attributes from customer online reviews using part-of-speech(POS) tagging patterns, and to evaluate their importance with several different frequency metrics. In BIBREF4 , Lee et al. developed a system to extract concept-attribute pairs from multiple data sources, such as Probase, general web documents, query logs and external knowledge base, and aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model. Those approaches typically suffer from the poor quality of the pattern rules, and the ranking process is used to identify relatively more precise attributes from all attribute candidates.",
"As for an already existing knowledge graph, there is plenty of work in literature dealing with ranking entities by relevance without or with a query. In BIBREF5 , Li et al. introduced the OntoRank algorithm for ranking the importance of semantic web objects at three levels of granularity: document, terms and RDF graphs. The algorithm is based on the rational surfer model, successfully used in the Swoogle semantic web search engine. In BIBREF6 , Hogan et al. presented an approach that adapted the well-known PageRank/HITS algorithms to semantic web data, which took advantage of property values to rank entities. In BIBREF7 , BIBREF8 , authors also focused on ranking entities, sorting the semantic web resources based on importance, relevance and query length, and aggregating the features together with an overall ranking model."
],
"highlighted_evidence": [
"In BIBREF0 , BIBREF1 , BIBREF2 , Pasca et al. firstly extract potential class-attribute pairs using linguistically motivated patterns from unstructured text including query logs and query sessions, and then score the attributes using the Bayes model. In BIBREF3 , Rahul Rai proposed to identify product attributes from customer online reviews using part-of-speech(POS) tagging patterns, and to evaluate their importance with several different frequency metrics. In BIBREF4 , Lee et al. developed a system to extract concept-attribute pairs from multiple data sources, such as Probase, general web documents, query logs and external knowledge base, and aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model.",
"In BIBREF5 , Li et al. introduced the OntoRank algorithm for ranking the importance of semantic web objects at three levels of granularity: document, terms and RDF graphs. The algorithm is based on the rational surfer model, successfully used in the Swoogle semantic web search engine."
]
},
{
"unanswerable": false,
"extractive_spans": [
"TextRank",
"Word2vec BIBREF19",
"GloVe BIBREF20"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The existing co-occurrence methods do not suit our application scenario at all, since exact string matching is too strong a requirement and initial trial has shown its incompetency. In stead we implemented an improved version of their method based on TextRank as our baseline. In addition, we also tested multiple semantic matching algorithms for comparison with our chosen method.",
"TextRank: TextRank is a graph-based ranking model for text processing. BIBREF18 It is an unsupervised algorithm for keyword extraction. Since product attributes are usually the keywords in enquiries, we can compare these keywords with the category attributes and find the most important attributes. This method consists of three steps. The first step is to merge all enquiries under one category as an article. The second step is to extract the top 50 keywords for each category. The third step is to find the most important attributes by comparing top keywords with category attributes.",
"Word2vec BIBREF19 : We use the word vector trained by BIBREF19 as the distributed representation of words. Then we get the enquiry sentence representation and category attribute representation. Finally we collect the statistics about the matched attributes of each category, and select the most frequent attributes under the same category.",
"GloVe BIBREF20 : GloVe is a global log-bilinear regression model for the unsupervised learning of word representations, which utilizes the ratios of word-word co-occurrence probabilities. We use the GloVe method to train the distributed representation of words. And attribute selection procedure is the same as word2vec."
],
"highlighted_evidence": [
"The existing co-occurrence methods do not suit our application scenario at all, since exact string matching is too strong a requirement and initial trial has shown its incompetency. In stead we implemented an improved version of their method based on TextRank as our baseline. In addition, we also tested multiple semantic matching algorithms for comparison with our chosen method.\n\nTextRank: TextRank is a graph-based ranking model for text processing. BIBREF18 It is an unsupervised algorithm for keyword extraction. Since product attributes are usually the keywords in enquiries, we can compare these keywords with the category attributes and find the most important attributes. This method consists of three steps. The first step is to merge all enquiries under one category as an article. The second step is to extract the top 50 keywords for each category. The third step is to find the most important attributes by comparing top keywords with category attributes.\n\nWord2vec BIBREF19 : We use the word vector trained by BIBREF19 as the distributed representation of words. Then we get the enquiry sentence representation and category attribute representation. Finally we collect the statistics about the matched attributes of each category, and select the most frequent attributes under the same category.\n\nGloVe BIBREF20 : GloVe is a global log-bilinear regression model for the unsupervised learning of word representations, which utilizes the ratios of word-word co-occurrence probabilities. We use the GloVe method to train the distributed representation of words. And attribute selection procedure is the same as word2vec."
]
}
],
"annotation_id": [
"62cc433cc7693e311f8a12ef4bbd86ceb8ba77fa",
"f7fd8ac10bd6556a2c753379861d62f8d46fe550"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"FastText"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Evaluating FastText, GloVe and word2vec, we show that compared to other word representation learning algorithms, the FastText performs best. We sample and analyze the category attributes and find that many self-filled attributes contain misspellings. The FastText algorithm represents words by a sum of its character n-grams and it is much robust against problems like misspellings. In summary, FastText has greater advantages in dealing with natural language corpus usually with spelling mistakes."
],
"highlighted_evidence": [
"Evaluating FastText, GloVe and word2vec, we show that compared to other word representation learning algorithms, the FastText performs best."
]
}
],
"annotation_id": [
"31f2fbb7e7be42290f75e8b139895aad95ba7b2b"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"6fcabf6ee6c8beaf30235688753d861e61de5c56"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Fig. 1. A typical product enquiry example on Alibaba.com",
"Fig. 2. Each sentence obtained from the enquiry is scored against possible attributes under that category.",
"Table 1. Proposed method vs other methods metrics: precision, recall and F1-score."
],
"file": [
"5-Figure1-1.png",
"7-Figure2-1.png",
"10-Table1-1.png"
]
} |
2003.08529 | Diversity, Density, and Homogeneity: Quantitative Characteristic Metrics for Text Collections | Summarizing data samples by quantitative measures has a long history, with descriptive statistics being a case in point. However, as natural language processing methods flourish, there are still insufficient characteristic metrics to describe a collection of texts in terms of the words, sentences, or paragraphs they comprise. In this work, we propose metrics of diversity, density, and homogeneity that quantitatively measure the dispersion, sparsity, and uniformity of a text collection. We conduct a series of simulations to verify that each metric holds desired properties and resonates with human intuitions. Experiments on real-world datasets demonstrate that the proposed characteristic metrics are highly correlated with text classification performance of a renowned model, BERT, which could inspire future applications. | {
"section_name": [
"Introduction",
"Related Work",
"Proposed Characteristic Metrics",
"Proposed Characteristic Metrics ::: Diversity",
"Proposed Characteristic Metrics ::: Density",
"Proposed Characteristic Metrics ::: Homogeneity",
"Simulations",
"Simulations ::: Simulation Setup",
"Simulations ::: Simulation Results",
"Experiments",
"Experiments ::: Chosen Embedding Method",
"Experiments ::: Experimental Setup",
"Experiments ::: Experimental Results",
"Experiments ::: Experimental Results ::: SST-2",
"Experiments ::: Experimental Results ::: Snips",
"Analysis",
"Conclusions"
],
"paragraphs": [
[
"Characteristic metrics are a set of unsupervised measures that quantitatively describe or summarize the properties of a data collection. These metrics generally do not use ground-truth labels and only measure the intrinsic characteristics of data. The most prominent example is descriptive statistics that summarizes a data collection by a group of unsupervised measures such as mean or median for central tendency, variance or minimum-maximum for dispersion, skewness for symmetry, and kurtosis for heavy-tailed analysis.",
"In recent years, text classification, a category of Natural Language Processing (NLP) tasks, has drawn much attention BIBREF0, BIBREF1, BIBREF2 for its wide-ranging real-world applications such as fake news detection BIBREF3, document classification BIBREF4, and spoken language understanding (SLU) BIBREF5, BIBREF6, BIBREF7, a core task of conversational assistants like Amazon Alexa or Google Assistant.",
"However, there are still insufficient characteristic metrics to describe a collection of texts. Unlike numeric or categorical data, simple descriptive statistics alone such as word counts and vocabulary size are difficult to capture the syntactic and semantic properties of a text collection.",
"In this work, we propose a set of characteristic metrics: diversity, density, and homogeneity to quantitatively summarize a collection of texts where the unit of texts could be a phrase, sentence, or paragraph. A text collection is first mapped into a high-dimensional embedding space. Our characteristic metrics are then computed to measure the dispersion, sparsity, and uniformity of the distribution. Based on the choice of embedding methods, these characteristic metrics can help understand the properties of a text collection from different linguistic perspectives, for example, lexical diversity, syntactic variation, and semantic homogeneity. Our proposed diversity, density, and homogeneity metrics extract hard-to-visualize quantitative insight for a better understanding and comparison between text collections.",
"To verify the effectiveness of proposed characteristic metrics, we first conduct a series of simulation experiments that cover various scenarios in two-dimensional as well as high-dimensional vector spaces. The results show that our proposed quantitative characteristic metrics exhibit several desirable and intuitive properties such as robustness and linear sensitivity of the diversity metric with respect to random down-sampling. Besides, we investigate the relationship between the characteristic metrics and the performance of a renowned model, BERT BIBREF8, on the text classification task using two public benchmark datasets. Our results demonstrate that there are high correlations between text classification model performance and the characteristic metrics, which shows the efficacy of our proposed metrics."
],
[
"A building block of characteristic metrics for text collections is the language representation method. A classic way to represent a sentence or a paragraph is n-gram, with dimension equals to the size of vocabulary. More advanced methods learn a relatively low dimensional latent space that represents each word or token as a continuous semantic vector such as word2vec BIBREF9, GloVe BIBREF10, and fastText BIBREF11. These methods have been widely adopted with consistent performance improvements on many NLP tasks. Also, there has been extensive research on representing a whole sentence as a vector such as a plain or weighted average of word vectors BIBREF12, skip-thought vectors BIBREF13, and self-attentive sentence encoders BIBREF14.",
"More recently, there is a paradigm shift from non-contextualized word embeddings to self-supervised language model (LM) pretraining. Language encoders are pretrained on a large text corpus using a LM-based objective and then re-used for other NLP tasks in a transfer learning manner. These methods can produce contextualized word representations, which have proven to be effective for significantly improving many NLP tasks. Among the most popular approaches are ULMFiT BIBREF2, ELMo BIBREF15, OpenAI GPT BIBREF16, and BERT BIBREF8. In this work, we adopt BERT, a transformer-based technique for NLP pretraining, as the backbone to embed a sentence or a paragraph into a representation vector.",
"Another stream of related works is the evaluation metrics for cluster analysis. As measuring property or quality of outputs from a clustering algorithm is difficult, human judgment with cluster visualization tools BIBREF17, BIBREF18 are often used. There are unsupervised metrics to measure the quality of a clustering result such as the Calinski-Harabasz score BIBREF19, the Davies-Bouldin index BIBREF20, and the Silhouette coefficients BIBREF21. Complementary to these works that model cross-cluster similarities or relationships, our proposed diversity, density and homogeneity metrics focus on the characteristics of each single cluster, i.e., intra cluster rather than inter cluster relationships."
],
[
"We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions.",
"Our first assumption is, for classification, high-quality training data entail that examples of one class are as differentiable and distinct as possible from another class. From a fine-grained and intra-class perspective, a robust text cluster should be diverse in syntax, which is captured by diversity. And each example should reflect a sufficient signature of the class to which it belongs, that is, each example is representative and contains certain salient features of the class. We define a density metric to account for this aspect. On top of that, examples should also be semantically similar and coherent among each other within a cluster, where homogeneity comes in play.",
"The more subtle intuition emerges from the inter-class viewpoint. When there are two or more class labels in a text collection, in an ideal scenario, we would expect the homogeneity to be monotonically decreasing. Potentially, the diversity is increasing with respect to the number of classes since text clusters should be as distinct and separate as possible from one another. If there is a significant ambiguity between classes, the behavior of the proposed metrics and a possible new metric as a inter-class confusability measurement remain for future work.",
"In practice, the input is a collection of texts $\\lbrace x_1, x_2, ..., x_m\\rbrace $, where $x_i$ is a sequence of tokens $x_{i1}, x_{i2}, ..., x_{il}$ denoting a phrase, a sentence, or a paragraph. An embedding method $\\mathcal {E}$ then transforms $x_i$ into a vector $\\mathcal {E}(x_i)=e_i$ and the characteristic metrics are computed with the embedding vectors. For example,",
"Note that these embedding vectors often lie in a high-dimensional space, e.g. commonly over 300 dimensions. This motivates our design of characteristic metrics to be sensitive to text collections of different properties while being robust to the curse of dimensionality.",
"We then assume a set of clusters created over the generated embedding vectors. In classification tasks, the embeddings pertaining to members of a class form a cluster, i.e., in a supervised setting. In an unsupervised setting, we may apply a clustering algorithm to the embeddings. It is worth noting that, in general, the metrics are independent of the assumed underlying grouping method."
],
[
"Embedding vectors of a given group of texts $\\lbrace e_1, ..., e_m\\rbrace $ can be treated as a cluster in the high-dimensional embedding space. We propose a diversity metric to estimate the cluster's dispersion or spreadness via a generalized sense of the radius.",
"Specifically, if a cluster is distributed as a multi-variate Gaussian with a diagonal covariance matrix $\\Sigma $, the shape of an isocontour will be an axis-aligned ellipsoid in $\\mathbb {R}^{H}$. Such isocontours can be described as:",
"where $x$ are all possible points in $\\mathbb {R}^{H}$ on an isocontour, $c$ is a constant, $\\mu $ is a given mean vector with $\\mu _j$ being the value along $j$-th axis, and $\\sigma ^2_j$ is the variance of the $j$-th axis.",
"We leverage the geometric interpretation of this formulation and treat the square root of variance, i.e., standard deviation, $\\sqrt{\\sigma ^2_j}$ as the radius $r_j$ of the ellipsoid along the $j$-th axis. The diversity metric is then defined as the geometric mean of radii across all axes:",
"where $\\sigma _i$ is the standard deviation or square root of the variance along the $i$-th axis.",
"In practice, to compute a diversity metric, we first calculate the standard deviation of embedding vectors along each dimension and take the geometric mean of all calculated values. Note that as the geometric mean acts as a dimensionality normalization, it makes the diversity metric work well in high-dimensional embedding spaces such as BERT."
],
[
"Another interesting characteristic is the sparsity of the text embedding cluster. The density metric is proposed to estimate the number of samples that falls within a unit of volume in an embedding space.",
"Following the assumption mentioned above, a straight-forward definition of the volume can be written as:",
"up to a constant factor. However, when the dimension goes higher, this formulation easily produces exploding or vanishing density values, i.e., goes to infinity or zero.",
"To accommodate the impact of high-dimensionality, we impose a dimension normalization. Specifically, we introduce a notion of effective axes, which assumes most variance can be explained or captured in a sub-space of a dimension $\\sqrt{H}$. We group all the axes in this sub-space together and compute the geometric mean of their radii as the effective radius. The dimension-normalized volume is then formulated as:",
"Given a set of embedding vectors $\\lbrace e_1, ..., e_m\\rbrace $, we define the density metric as:",
"In practice, the computed density metric values often follow a heavy-tailed distribution, thus sometimes its $\\log $ value is reported and denoted as $density (log\\-scale)$."
],
[
"The homogeneity metric is proposed to summarize the uniformity of a cluster distribution. That is, how uniformly the embedding vectors of the samples in a group of texts are distributed in the embedding space. We propose to quantitatively describe homogeneity by building a fully-connected, edge-weighted network, which can be modeled by a Markov chain model. A Markov chain's entropy rate is calculated and normalized to be in $[0, 1]$ range by dividing by the entropy's theoretical upper bound. This output value is defined as the homogeneity metric detailed as follows:",
"To construct a fully-connected network from the embedding vectors $\\lbrace e_1, ..., e_m\\rbrace $, we compute their pairwise distances as edge weights, an idea similar to AttriRank BIBREF22. As the Euclidean distance is not a good metric in high-dimensions, we normalize the distance by adding a power $\\log (n\\_dim)$. We then define a Markov chain model with the weight of $edge(i, j)$ being",
"and the conditional probability of transition from $i$ to $j$ can be written as",
"All the transition probabilities $p(i \\rightarrow j)$ are from the transition matrix of a Markov chain. An entropy of this Markov chain can be calculated as",
"where $\\nu _i$ is the stationary distribution of the Markov chain. As self-transition probability $p(i \\rightarrow i)$ is always zero because of zero distance, there are $(m - 1)$ possible destinations and the entropy's theoretical upper bound becomes",
"Our proposed homogeneity metric is then normalized into $[0, 1]$ as a uniformity measure:",
"The intuition is that if some samples are close to each other but far from all the others, the calculated entropy decreases to reflect the unbalanced distribution. In contrast, if each sample can reach other samples within more-or-less the same distances, the calculated entropy as well as the homogeneity measure would be high as it implies the samples could be more uniformly distributed."
],
[
"To verify that each proposed characteristic metric holds its desirable and intuitive properties, we conduct a series of simulation experiments in 2-dimensional as well as 768-dimensional spaces. The latter has the same dimensionality as the output of our chosen embedding method-BERT, in the following Experiments section."
],
[
"The base simulation setup is a randomly generated isotropic Gaussian blob that contains $10,000$ data points with the standard deviation along each axis to be $1.0$ and is centered around the origin. All Gaussian blobs are created using make_blobs function in the scikit-learn package.",
"Four simulation scenarios are used to investigate the behavior of our proposed quantitative characteristic metrics:",
"Down-sampling: Down-sample the base cluster to be $\\lbrace 90\\%, 80\\%, ..., 10\\%\\rbrace $ of its original size. That is, create Gaussian blobs with $\\lbrace 9000, ..., 1000\\rbrace $ data points;",
"Varying Spread: Generate Gaussian blobs with standard deviations of each axis to be $\\lbrace 2.0, 3.0, ..., 10.0\\rbrace $;",
"Outliers: Add $\\lbrace 50, 100, ..., 500\\rbrace $ outlier data points, i.e., $\\lbrace 0.5\\%, ..., 5\\%\\rbrace $ of the original cluster size, randomly on the surface with a fixed norm or radius;",
"Multiple Sub-clusters: Along the 1th-axis, with $10,000$ data points in total, create $\\lbrace 1, 2, ..., 10\\rbrace $ clusters with equal sample sizes but at increasing distance.",
"For each scenario, we simulate a cluster and compute the characteristic metrics in both 2-dimensional and 768-dimensional spaces. Figure FIGREF17 visualizes each scenario by t-distributed Stochastic Neighbor Embedding (t-SNE) BIBREF23. The 768-dimensional simulations are visualized by down-projecting to 50 dimensions via Principal Component Analysis (PCA) followed by t-SNE."
],
[
"Figure FIGREF24 summarizes calculated diversity metrics in the first row, density metrics in the second row, and homogeneity metrics in the third row, for all simulation scenarios.",
"The diversity metric is robust as its values remain almost the same to the down-sampling of an input cluster. This implies the diversity metric has a desirable property that it is insensitive to the size of inputs. On the other hand, it shows a linear relationship to varying spreads. It is another intuitive property for a diversity metric that it grows linearly with increasing dispersion or variance of input data. With more outliers or more sub-clusters, the diversity metric can also reflect the increasing dispersion of cluster distributions but is less sensitive in high-dimensional spaces.",
"For the density metrics, it exhibits a linear relationship to the size of inputs when down-sampling, which is desired. When increasing spreads, the trend of density metrics corresponds well with human intuition. Note that the density metrics decrease at a much faster rate in higher-dimensional space as log-scale is used in the figure. The density metrics also drop when adding outliers or having multiple distant sub-clusters. This makes sense since both scenarios should increase the dispersion of data and thus increase our notion of volume as well. In multiple sub-cluster scenario, the density metric becomes less sensitive in the higher-dimensional space. The reason could be that the sub-clusters are distributed only along one axis and thus have a smaller impact on volume in higher-dimensional spaces.",
"As random down-sampling or increasing variance of each axis should not affect the uniformity of a cluster distribution, we expect the homogeneity metric remains approximately the same values. And the proposed homogeneity metric indeed demonstrates these ideal properties. Interestingly, for outliers, we first saw huge drops of the homogeneity metric but the values go up again slowly when more outliers are added. This corresponds well with our intuitions that a small number of outliers break the uniformity but more outliers should mean an increase of uniformity because the distribution of added outliers themselves has a high uniformity.",
"For multiple sub-clusters, as more sub-clusters are presented, the homogeneity should and does decrease as the data are less and less uniformly distributed in the space.",
"To sum up, from all simulations, our proposed diversity, density, and homogeneity metrics indeed capture the essence or intuition of dispersion, sparsity, and uniformity in a cluster distribution."
],
[
"The two real-world text classification tasks we used for experiments are sentiment analysis and Spoken Language Understanding (SLU)."
],
[
"BERT is a self-supervised language model pretraining approach based on the Transformer BIBREF24, a multi-headed self-attention architecture that can produce different representation vectors for the same token in various sequences, i.e., contextual embeddings.",
"When pretraining, BERT concatenates two sequences as input, with special tokens $[CLS], [SEP], [EOS]$ denoting the start, separation, and end, respectively. BERT is then pretrained on a large unlabeled corpus with objective-masked language model (MLM), which randomly masks out tokens, and the model predicts the masked tokens. The other classification task is next sentence prediction (NSP). NSP is to predict whether two sequences follow each other in the original text or not.",
"In this work, we use the pretrained $\\text{BERT}_{\\text{BASE}}$ which has 12 layers (L), 12 self-attention heads (A), and 768 hidden dimension (H) as the language embedding to compute the proposed data metrics. The off-the-shelf pretrained BERT is obtained from GluonNLP. For each sequence $x_i = (x_{i1}, ..., x_{il})$ with length $l$, BERT takes $[CLS], x_{i1}, ..., x_{il}, [EOS]$ as input and generates embeddings $\\lbrace e_{CLS}, e_{i1}, ..., e_{il}, e_{EOS}\\rbrace $ at the token level. To obtain the sequence representation, we use a mean pooling over token embeddings:",
"where $e_i \\in \\mathbb {R}^{H}$. A text collection $\\lbrace x_1, ..., x_m\\rbrace $, i.e., a set of token sequences, is then transformed into a group of H-dimensional vectors $\\lbrace e_1, ..., e_m\\rbrace $.",
"We compute each metric as described previously, using three BERT layers L1, L6, and L12 as the embedding space, respectively. The calculated metric values are averaged over layers for each class and averaged over classes weighted by class size as the final value for a dataset."
],
[
"In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.",
"The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention of a text input (i.e., utterance, conveys). For example, for an input of I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents.",
"In both tasks, we used the open-sourced GluonNLP BERT model to perform text classification. For evaluation, sentiment analysis is measured in accuracy, whereas IC and SL are measured in accuracy and F1 score, respectively. BERT is fine-tuned on train/dev sets and evaluated on test sets.",
"We down-sampled SST-2 and Snips training sets from $100\\%$ to $10\\%$ with intervals being $10\\%$. BERT's performance is reported for each down-sampled setting in Table TABREF29 and Table TABREF30. We used entire test sets for all model evaluations.",
"To compare, we compute the proposed data metrics, i.e., diversity, density, and homogeneity, on the original and the down-sampled training sets."
],
[
"We will discuss the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from down-sampling experiments on the two public benchmark datasets, in the following subsections:"
],
[
"In Table TABREF29, the sentiment classification accuracy is $92.66\\%$ without down-sampling, which is consistent with the reported GluonNLP BERT model performance on SST-2. It also indicates SST-2 training data are differentiable between label classes, i.e., from the positive class to the negative class, which satisfies our assumption for the characteristic metrics.",
"Decreasing the training set size does not reduce performance until it is randomly down-sampled to only $20\\%$ of the original size. Meanwhile, density and homogeneity metrics also decrease significantly (highlighted in bold in Table TABREF29), implying a clear relationship between these metrics and model performance."
],
[
"In Table TABREF30, the Snips dataset seems to be distinct between IC/SL classes since the IC accurcy and SL F1 are as high as $98.71\\%$ and $96.06\\%$ without down-sampling, respectively. Similar to SST-2, this implies that Snips training data should also support the inter-class differentiability assumption for our proposed characteristic metrics.",
"IC accuracy on Snips remains higher than $98\\%$ until we down-sample the training set to $20\\%$ of the original size. In contrast, SL F1 score is more sensitive to the down-sampling of the training set, as it starts decreasing when down-sampling. When the training set is only $10\\%$ left, SL F1 score drops to $87.20\\%$.",
"The diversity metric does not decrease immediately until the training set equals to or is less than $40\\%$ of the original set. This implies that random sampling does not impact the diversity, if the sampling rate is greater than $40\\%$. The training set is very likely to contain redundant information in terms of text diversity. This is supported by what we observed as model has consistently high IC/SL performances between $40\\%$-$100\\%$ down-sampling ratios.",
"Moreover, the biggest drop of density and homogeneity (highlighted in bold in Table TABREF30) highly correlates with the biggest IC/SL drop, at the point the training set size is reduced from $20\\%$ to $10\\%$. This suggests that our proposed metrics can be used as a good indicator of model performance and for characterizing text datasets."
],
[
"We calculate and show in Table TABREF35 the Pearson's correlations between the three proposed characteristic metrics, i.e., diversity, density, and homogeneity, and model performance scores from down-sampling experiments in Table TABREF29 and Table TABREF30. Correlations higher than $0.5$ are highlighted in bold. As mentioned before, model performance is highly correlated with density and homogeneity, both are computed on the train set. Diversity is only correlated with Snips SL F1 score at a moderate level.",
"",
"These are consistent with our simulation results, which shows that random sampling of a dataset does not necessarily affect the diversity but can reduce the density and marginally homogeneity due to the decreasing of data points in the embedding space. However, the simultaneous huge drops of model performance, density, and homogeneity imply that there is only limited redundancy and more informative data points are being thrown away when down-sampling. Moreover, results also suggest that model performance on text classification tasks corresponds not only with data diversity but also with training data density and homogeneity as well."
],
[
"In this work, we proposed several characteristic metrics to describe the diversity, density, and homogeneity of text collections without using any labels. Pre-trained language embeddings are used to efficiently characterize text datasets. Simulation and experiments showed that our intrinsic metrics are robust and highly correlated with model performance on different text classification tasks. We would like to apply the diversity, density, and homogeneity metrics for text data augmentation and selection in a semi-supervised manner as our future work."
]
]
} | {
"question": [
"Did they propose other metrics?",
"Which real-world datasets did they use?",
"How did they obtain human intuitions?"
],
"question_id": [
"b5c3787ab3784214fc35f230ac4926fe184d86ba",
"9174aded45bc36915f2e2adb6f352f3c7d9ada8b",
"a8f1029f6766bffee38a627477f61457b2d6ed5c"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions."
],
"highlighted_evidence": [
"We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions."
]
}
],
"annotation_id": [
"384422d75e76804b2563b28bf4d4e0dde94d40b9"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"SST-2 (Stanford Sentiment Treebank, version 2)",
"Snips"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.",
"The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention of a text input (i.e., utterance, conveys). For example, for an input of I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents."
],
"highlighted_evidence": [
"In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.",
"Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents."
]
},
{
"unanswerable": false,
"extractive_spans": [
"SST-2",
"Snips"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.",
"The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention of a text input (i.e., utterance, conveys). For example, for an input of I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents."
],
"highlighted_evidence": [
"In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments.",
"Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research."
]
}
],
"annotation_id": [
"3325009492879b3a0055e221fffdecc8faf526ac",
"a51891bbee07eec43f09ab50a5774c1fffc4f24b"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"eac5880698001ea697ae5b2496c30db033e60b7c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Visualization of the simulations including base setting, down-sampling, varying spreads, adding outliers, and multiple sub-clusters in 2-dimensional and 768-dimensional spaces.",
"Figure 2: Diversity, density, and homogeneity metric values in each simulation scenario.",
"Table 1: The experimental results of diversity, density, and homogeneity metrics with classification accuracy on the SST-2 dataset.",
"Table 2: The experimental results of diversity, density, and homogeneity metrics with intent classification (IC) accuracy and slot labeling (SL) F1 scores on the Snips dataset. Experimental setup is the same as that in Table 1.",
"Table 3: The Pearson’s correlation (Corr.) between proposed characteristic metrics (diversity, density, and homogeneity) and model accuracy (Acc.) or F1 scores from down-sampling experiments in Table 1 and Table 2."
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png"
]
} |
1708.05873 | What Drives the International Development Agenda? An NLP Analysis of the United Nations General Debate 1970-2016 | There is surprisingly little known about agenda setting for international development in the United Nations (UN) despite it having a significant influence on the process and outcomes of development efforts. This paper addresses this shortcoming using a novel approach that applies natural language processing techniques to countries' annual statements in the UN General Debate. Every year UN member states deliver statements during the General Debate on their governments' perspectives on major issues in world politics. These speeches provide invaluable information on state preferences on a wide range of issues, including international development, but have largely been overlooked in the study of global politics. This paper identifies the main international development topics that states raise in these speeches between 1970 and 2016, and examines the country-specific drivers of international development rhetoric. | {
"section_name": [
"Introduction",
"The UN General Debate and international development",
"Estimation of topic models",
"Topics in the UN General Debate",
"Explaining the rhetoric",
"Conclusion"
],
"paragraphs": [
[
"Decisions made in international organisations are fundamental to international development efforts and initiatives. It is in these global governance arenas that the rules of the global economic system, which have a huge impact on development outcomes are agreed on; decisions are made about large-scale funding for development issues, such as health and infrastructure; and key development goals and targets are agreed on, as can be seen with the Millennium Development Goals (MDGs). More generally, international organisations have a profound influence on the ideas that shape international development efforts BIBREF0 .",
"Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.",
"The lack of knowledge about the agenda setting process in the global governance of development is in large part due to the absence of obvious data sources on states' preferences about international development issues. To address this gap we employ a novel approach based on the application of natural language processing (NLP) to countries' speeches in the UN. Every September, the heads of state and other high-level country representatives gather in New York at the start of a new session of the United Nations General Assembly (UNGA) and address the Assembly in the General Debate. The General Debate (GD) provides the governments of the almost two hundred UN member states with an opportunity to present their views on key issues in international politics – including international development. As such, the statements made during GD are an invaluable and, largely untapped, source of information on governments' policy preferences on international development over time.",
"An important feature of these annual country statements is that they are not institutionally connected to decision-making in the UN. This means that governments face few external constraints when delivering these speeches, enabling them to raise the issues that they consider the most important. Therefore, the General Debate acts “as a barometer of international opinion on important issues, even those not on the agenda for that particular session” BIBREF2 . In fact, the GD is usually the first item for each new session of the UNGA, and as such it provides a forum for governments to identify like-minded members, and to put on the record the issues they feel the UNGA should address. Therefore, the GD can be viewed as a key forum for governments to put different policy issues on international agenda.",
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements."
],
[
"In the analysis we consider the nature of international development issues raised in the UN General Debates, and the effect of structural covariates on the level of developmental rhetoric in the GD statements. To do this, we first implement a structural topic model BIBREF4 . This enables us to identify the key international development topics discussed in the GD. We model topic prevalence in the context of the structural covariates. In addition, we control for region fixed effects and time trend. The aim is to allow the observed metadata to affect the frequency with which a topic is discussed in General Debate speeches. This allows us to test the degree of association between covariates (and region/time effects) and the average proportion of a document discussing a topic."
],
[
"We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose semantic coherence measure, which is closely related to point-wise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.",
"Exclusivity scores for each topic follows BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8 we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by highest probability words, which is presented in Figure FIGREF4 ."
],
[
"Figure FIGREF4 provides a list of the main topics (and the highest probability words associated these topics) that emerge from the STM of UN General Debate statements. In addition to the highest probability words, we use several other measures of key words (not presented here) to interpret the dimensions. This includes the FREX metric (which combines exclusivity and word frequency), the lift (which gives weight to words that appear less frequently in other topics), and the score (which divides the log frequency of the word in the topic by the log frequency of the word in other topics). We provide a brief description of each of the 16 topics here.",
"Topic 1 - Security and cooperation in Europe.",
"The first topic is related to issues of security and cooperation, with a focus on Central and Eastern Europe.",
"Topic 2 - Economic development and the global system.",
"This topic is related to economic development, particularly around the global economic system. The focus on `trade', `growth', `econom-', `product', `growth', `financ-', and etc. suggests that Topic 2 represent a more traditional view of international development in that the emphasis is specifically on economic processes and relations.",
"Topic 3 - Nuclear disarmament.",
"This topic picks up the issue of nuclear weapons, which has been a major issue in the UN since its founding.",
"Topic 4 - Post-conflict development.",
"This topic relates to post-conflict development. The countries that feature in the key words (e.g. Rwanda, Liberia, Bosnia) have experienced devastating civil wars, and the emphasis on words such as `develop', `peace', `hope', and `democrac-' suggest that this topic relates to how these countries recover and move forward.",
"Topic 5 - African independence / decolonisation.",
"This topic picks up the issue of African decolonisation and independence. It includes the issue of apartheid in South Africa, as well as racism and imperialism more broadly.",
"Topic 6 - Africa.",
"While the previous topic focused explicitly on issues of African independence and decolonisation, this topic more generally picks up issues linked to Africa, including peace, governance, security, and development.",
"Topic 7 - Sustainable development.",
"This topic centres on sustainable development, picking up various issues linked to development and climate change. In contrast to Topic 2, this topic includes some of the newer issues that have emerged in the international development agenda, such as sustainability, gender, education, work and the MDGs.",
"Topic 8 - Functional topic.",
"This topic appears to be comprised of functional or process-oriented words e.g. `problem', `solution', `effort', `general', etc.",
"Topic 9 - War.",
"This topic directly relates to issues of war. The key words appear to be linked to discussions around ongoing wars.",
"Topic 10 - Conflict in the Middle East.",
"This topic clearly picks up issues related to the Middle East – particularly around peace and conflict in the Middle East.",
"Topic 11 - Latin America.",
"This is another topic with a regional focus, picking up on issues related to Latin America.",
"Topic 12 - Commonwealth.",
"This is another of the less obvious topics to emerge from the STM in that the key words cover a wide range of issues. However, the places listed (e.g. Australia, Sri Lanka, Papua New Guinea) suggest the topic is related to the Commonwealth (or former British colonies).",
"Topic 13 - International security.",
"This topic broadly captures international security issues (e.g. terrorism, conflict, peace) and in particularly the international response to security threats, such as the deployment of peacekeepers.",
"Topic 14 - International law.",
"This topic picks up issues related to international law, particularly connected to territorial disputes.",
"Topic 15 - Decolonisation.",
"This topic relates more broadly to decolonisation. As well as specific mention of decolonisation, the key words include a range of issues and places linked to the decolonisation process.",
"Topic 16 - Cold War.",
"This is another of the less tightly defined topics. The topics appears to pick up issues that are broadly related to the Cold War. There is specific mention of the Soviet Union, and detente, as well as issues such as nuclear weapons, and the Helsinki Accords.",
"Based on these topics, we examine Topic 2 and Topic 7 as the principal “international development” topics. While a number of other topics – for example post-conflict development, Africa, Latin America, etc. – are related to development issues, Topic 2 and Topic 7 most directly capture aspects of international development. We consider these two topics more closely by contrasting the main words linked to these two topics. In Figure FIGREF6 , the word clouds show the 50 words most likely to mentioned in relation to each of the topics.",
"The word clouds provide further support for Topic 2 representing a more traditional view of international development focusing on economic processes. In addition to a strong emphasis on 'econom-', other key words, such as `trade', `debt', `market', `growth', `industri-', `financi-', `technolog-', `product', and `argicultur-', demonstrate the narrower economic focus on international development captured by Topic 2. In contrast, Topic 7 provides a much broader focus on development, with key words including `climat-', `sustain', `environ-', `educ-', `health', `women', `work', `mdgs', `peac-', `govern-', and `right'. Therefore, Topic 7 captures many of the issues that feature in the recent Sustainable Development Goals (SDGs) agenda BIBREF9 .",
"Figure FIGREF7 calculates the difference in probability of a word for the two topics, normalized by the maximum difference in probability of any word between the two topics. The figure demonstrates that while there is a much high probability of words, such as `econom-', `trade', and even `develop-' being used to discuss Topic 2; words such as `climat-', `govern-', `sustain', `goal', and `support' being used in association with Topic 7. This provides further support for the Topic 2 representing a more economistic view of international development, while Topic 7 relating to a broader sustainable development agenda.",
"We also assess the relationship between topics in the STM framework, which allows correlations between topics to be examined. This is shown in the network of topics in Figure FIGREF8 . The figure shows that Topic 2 and Topic 7 are closely related, which we would expect as they both deal with international development (and share key words on development, such as `develop-', `povert-', etc.). It is also worth noting that while Topic 2 is more closely correlated with the Latin America topic (Topic 11), Topic 7 is more directly correlated with the Africa topic (Topic 6)."
],
[
"We next look at the relationship between topic proportions and structural factors. The data for these structural covariates is taken from the World Bank's World Development Indicators (WDI) unless otherwise stated. Confidence intervals produced by the method of composition in STM allow us to pick up statistical uncertainty in the linear regression model.",
"Figure FIGREF9 demonstrates the effect of wealth (GDP per capita) on the the extent to which states discuss the two international development topics in their GD statements. The figure shows that the relationship between wealth and the topic proportions linked to international development differs across Topic 2 and Topic 7. Discussion of Topic 2 (economic development) remains far more constant across different levels of wealth than Topic 7. The poorest states tend to discuss both topics more than other developing nations. However, this effect is larger for Topic 7. There is a decline in the proportion of both topics as countries become wealthier until around $30,000 when there is an increase in discussion of Topic 7. There is a further pronounced increase in the extent countries discuss Topic 7 at around $60,000 per capita. However, there is a decline in expected topic proportions for both Topic 2 and Topic 7 for the very wealthiest countries.",
"Figure FIGREF10 shows the expected topic proportions for Topic 2 and Topic 7 associated with different population sizes. The figure shows a slight surge in the discussion of both development topics for countries with the very smallest populations. This reflects the significant amount of discussion of development issues, particularly sustainable development (Topic 7) by the small island developing states (SIDs). The discussion of Topic 2 remains relatively constant across different population sizes, with a slight increase in the expected topic proportion for the countries with the very largest populations. However, with Topic 7 there is an increase in expected topic proportion until countries have a population of around 300 million, after which there is a decline in discussion of Topic 7. For countries with populations larger than 500 million there is no effect of population on discussion of Topic 7. It is only with the very largest populations that we see a positive effect on discussion of Topic 7.",
"We would also expect the extent to which states discuss international development in their GD statements to be impacted by the amount of aid or official development assistance (ODA) they receive. Figure FIGREF11 plots the expected topic proportion according to the amount of ODA countries receive. Broadly-speaking the discussion of development topics remains largely constant across different levels of ODA received. There is, however, a slight increase in the expected topic proportions of Topic 7 according to the amount of ODA received. It is also worth noting the spikes in discussion of Topic 2 and Topic 7 for countries that receive negative levels of ODA. These are countries that are effectively repaying more in loans to lenders than they are receiving in ODA. These countries appear to raise development issues far more in their GD statements, which is perhaps not altogether surprising.",
"We also consider the effects of democracy on the expected topic proportions of both development topics using the Polity IV measure of democracy BIBREF10 . Figure FIGREF12 shows the extent to which states discuss the international development topics according to their level of democracy. Discussion of Topic 2 is fairly constant across different levels of democracy (although there are some slight fluctuations). However, the extent to which states discuss Topic 7 (sustainable development) varies considerably across different levels of democracy. Somewhat surprisingly the most autocratic states tend to discuss Topic 7 more than the slightly less autocratic states. This may be because highly autocratic governments choose to discuss development and environmental issues to avoid a focus on democracy and human rights. There is then an increase in the expected topic proportion for Topic 7 as levels of democracy increase reaching a peak at around 5 on the Polity scale, after this there is a gradual decline in discussion of Topic 7. This would suggest that democratizing or semi-democratic countries (which are more likely to be developing countries with democratic institutions) discuss sustainable development more than established democracies (that are more likely to be developed countries).",
"We also plot the results of the analysis as the difference in topic proportions for two different values of the effect of conflict. Our measure of whether a country is experiencing a civil conflict comes from the UCDP/PRIO Armed Conflict Dataset BIBREF11 . Point estimates and 95% confidence intervals are plotted in Figure FIGREF13 . The figure shows that conflict affects only Topic 7 and not Topic 2. Countries experiencing conflict are less likely to discuss Topic 7 (sustainable development) than countries not experiencing conflict. The most likely explanation is that these countries are more likely to devote a greater proportion of their annual statements to discussing issues around conflict and security than development. The fact that there is no effect of conflict on Topic 2 is interesting in this regard.",
"Finally, we consider regional effects in Figure FIGREF14 . We use the World Bank's classifications of regions: Latin America and the Caribbean (LCN), South Asia (SAS), Sub-Saharan Africa (SSA), Europe and Central Asia (ECS), Middle East and North Africa (MEA), East Asia and the Pacific (EAS), North America (NAC). The figure shows that states in South Asia, and Latin America and the Caribbean are likely to discuss Topic 2 the most. States in South Asia and East Asia and the Pacific discuss Topic 7 the most. The figure shows that countries in North America are likely to speak about Topic 7 least.",
"The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principle development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that broadly-speaking we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration."
],
[
"Despite decisions taken in international organisations having a huge impact on development initiatives and outcomes, we know relatively little about the agenda-setting process around the global governance of development. Using a novel approach that applies NLP methods to a new dataset of speeches in the UN General Debate, this paper has uncovered the main development topics discussed by governments in the UN, and the structural factors that influence the degree to which governments discuss international development. In doing so, the paper has shed some light on state preferences regarding the international development agenda in the UN. The paper more broadly demonstrates how text analytic approaches can help us to better understand different aspects of global governance."
]
]
} | {
"question": [
"What are the country-specific drivers of international development rhetoric?",
"Is the dataset multilingual?",
"How are the main international development topics that states raise identified?"
],
"question_id": [
"a2103e7fe613549a9db5e65008f33cf2ee0403bd",
"13b36644357870008d70e5601f394ec3c6c07048",
"e4a19b91b57c006a9086ae07f2d6d6471a8cf0ce"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"wealth ",
"democracy ",
"population",
"levels of ODA",
"conflict "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.",
"The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principle development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that broadly-speaking we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration."
],
"highlighted_evidence": [
" More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.",
" We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). "
]
}
],
"annotation_id": [
"9a8d3b251090979a6b4c6d04ed95386a881bbd1c"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements."
],
"highlighted_evidence": [
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . "
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements.",
"FLOAT SELECTED: Fig. 2. Topic quality. 20 highest probability words for the 16-topic model."
],
"highlighted_evidence": [
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . ",
"FLOAT SELECTED: Fig. 2. Topic quality. 20 highest probability words for the 16-topic model."
]
}
],
"annotation_id": [
"3976a227b981d398255fd5581bce0111300e6916",
"45b831b84ca84f2bd169ab070e005947b848d2e8"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": " They focus on exclusivity and semantic coherence measures: Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. They select select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence.",
"evidence": [
"We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose semantic coherence measure, which is closely related to point-wise mutual information measure posited by BIBREF6 to evaluate topic quality. BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.",
"Exclusivity scores for each topic follows BIBREF7 . Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following BIBREF8 we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by highest probability words, which is presented in Figure FIGREF4 ."
],
"highlighted_evidence": [
"We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures.",
"Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. ",
"Following BIBREF8 we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure FIGREF3 ). Models above the regression line have a “better” exclusivity-semantic coherence trade off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. The topic quality is usually evaluated by highest probability words, which is presented in Figure FIGREF4 ."
]
}
],
"annotation_id": [
"b5c02e8f62e47bd5c139f9741433bd8cec5ae9bb"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Fig. 1. Optimal model search. Semantic coherence and exclusivity results for a model search from 3 to 50 topics. Models above the regression line provide a better trade off. Largest positive residual is a 16-topic model.",
"Fig. 2. Topic quality. 20 highest probability words for the 16-topic model.",
"Fig. 3. Topic content. 50 highest probability words for the 2nd and 7th topics.",
"Fig. 4. Comparing Topics 2 and 7 quality. 50 highest probability words contrasted between Topics 2 and 7.",
"Fig. 5. Network of topics. Correlation of topics.",
"Fig. 6. Effect of wealth. Main effect and 95% confidence interval.",
"Fig. 7. Effect of population. Main effect and 95% confidence interval.",
"Fig. 9. Effect of democracy. Main effect and 95% confidence interval.",
"Fig. 8. Effect of ODA. Main effect and 95% confidence interval.",
"Fig. 10. Effect of conflict. Point estimates and 95% confidence intervals.",
"Fig. 11. Regional effects. Point estimates and 95% confidence intervals."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Figure5-1.png",
"4-Figure6-1.png",
"5-Figure7-1.png",
"5-Figure9-1.png",
"5-Figure8-1.png",
"6-Figure10-1.png",
"6-Figure11-1.png"
]
} |
2003.08553 | QnAMaker: Data to Bot in 2 Minutes | Having a bot for seamless conversations is a much-desired feature that products and services today seek for their websites and mobile apps. These bots help reduce traffic received by human support significantly by handling frequent and directly answerable known questions. Many such services have huge reference documents such as FAQ pages, which makes it hard for users to browse through this data. A conversation layer over such raw data can lower traffic to human support by a great margin. We demonstrate QnAMaker, a service that creates a conversational layer over semi-structured data such as FAQ pages, product manuals, and support documents. QnAMaker is the popular choice for Extraction and Question-Answering as a service and is used by over 15,000 bots in production. It is also used by search interfaces and not just bots. | {
"section_name": [
"Introduction",
"System description ::: Architecture",
"System description ::: Bot Development Process",
"System description ::: Extraction",
"System description ::: Retrieval And Ranking",
"System description ::: Retrieval And Ranking ::: Pre-Processing",
"System description ::: Retrieval And Ranking ::: Features",
"System description ::: Retrieval And Ranking ::: Contextual Features",
"System description ::: Retrieval And Ranking ::: Modeling and Training",
"System description ::: Persona Based Chit-Chat",
"System description ::: Active Learning",
"Evaluation and Insights",
"Demonstration",
"Future Work"
],
"paragraphs": [
[
"QnAMaker aims to simplify the process of bot creation by extracting Question-Answer (QA) pairs from data given by users into a Knowledge Base (KB) and providing a conversational layer over it. KB here refers to one instance of azure search index, where the extracted QA are stored. Whenever a developer creates a KB using QnAMaker, they automatically get all NLP capabilities required to answer user's queries. There are other systems such as Google's Dialogflow, IBM's Watson Discovery which tries to solve this problem. QnAMaker provides unique features for the ease of development such as the ability to add a persona-based chit-chat layer on top of the bot. Additionally, bot developers get automatic feedback from the system based on end-user traffic and interaction which helps them in enriching the KB; we call this feature active-learning. Our system also allows user to add Multi-Turn structure to KB using hierarchical extraction and contextual ranking. QnAMaker today supports over 35 languages, and is the only system among its competitors to follow a Server-Client architecture; all the KB data rests only in the client's subscription, giving users total control over their data. QnAMaker is part of Microsoft Cognitive Service and currently runs using the Microsoft Azure Stack."
],
[
"As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.",
"Bot: Calls the WebApp with the User's query to get results."
],
[
"Creating a bot is a 3-step process for a bot developer:",
"Create a QnaMaker Resource in Azure: This creates a WebApp with binaries required to run QnAMaker. It also creates an Azure Search Service for populating the index with any given knowledge base, extracted from user data",
"Use Management APIs to Create/Update/Delete your KB: The Create API automatically extracts the QA pairs and sends the Content to WebApp, which indexes it in Azure Search Index. Developers can also add persona-based chat content and synonyms while creating and updating their KBs.",
"Bot Creation: Create a bot using any framework and call the WebApp hosted in Azure to get your queries answered. There are Bot-Framework templates provided for the same."
],
[
"The Extraction component is responsible for understanding a given document and extracting potential QA pairs. These QA pairs are in turn used to create a KB to be consumed later on by the QnAMaker WebApp to answer user queries. First, the basic blocks from given documents such as text, lines are extracted. Then the layout of the document such as columns, tables, lists, paragraphs, etc is extracted. This is done using Recursive X-Y cut BIBREF0. Following Layout Understanding, each element is tagged as headers, footers, table of content, index, watermark, table, image, table caption, image caption, heading, heading level, and answers. Agglomerative clustering BIBREF1 is used to identify heading and hierarchy to form an intent tree. Leaf nodes from the hierarchy are considered as QA pairs. In the end, the intent tree is further augmented with entities using CRF-based sequence labeling. Intents that are repeated in and across documents are further augmented with their parent intent, adding more context to resolve potential ambiguity."
],
[
"QnAMaker uses Azure Search Index as it's retrieval layer, followed by re-ranking on top of retrieved results (Figure FIGREF21). Azure Search is based on inverted indexing and TF-IDF scores. Azure Search provides fuzzy matching based on edit-distance, thus making retrieval robust to spelling mistakes. It also incorporates lemmatization and normalization. These indexes can scale up to millions of documents, lowering the burden on QnAMaker WebApp which gets less than 100 results to re-rank.",
"Different customers may use QnAMaker for different scenarios such as banking task completion, answering FAQs on company policies, or fun and engagement. The number of QAs, length of questions and answers, number of alternate questions per QA can vary significantly across different types of content. Thus, the ranker model needs to use features that are generic enough to be relevant across all use cases."
],
[
"The pre-processing layer uses components such as Language Detection, Lemmatization, Speller, and Word Breaker to normalize user queries. It also removes junk characters and stop-words from the user's query."
],
[
"Going into granular features and the exact empirical formulas used is out of the scope of this paper. The broad level features used while ranking are:",
"WordNet: There are various features generated using WordNet BIBREF2 matching with questions and answers. This takes care of word-level semantics. For instance, if there is information about “price of furniture\" in a KB and the end-user asks about “price of table\", the user will likely get a relevant answer. The scores of these WordNet features are calculated as a function of:",
"Distance of 2 words in the WordNet graph",
"Distance of Lowest Common Hypernym from the root",
"Knowledge-Base word importance (Local IDFs)",
"Global word importance (Global IDFs)",
"This is the most important feature in our model as it has the highest relative feature gain.",
"CDSSM: Convolutional Deep Structured Semantic Models BIBREF3 are used for sentence-level semantic matching. This is a dual encoder model that converts text strings (sentences, queries, predicates, entity mentions, etc) into their vector representations. These models are trained using millions of Bing Query Title Click-Through data. Using the source-model for vectorizing user query and target-model for vectorizing answer, we compute the cosine similarity between these two vectors, giving the relevance of answer corresponding to the query.",
"TF-IDF: Though sentence-to-vector models are trained on huge datasets, they fail to effectively disambiguate KB specific data. This is where a standard TF-IDF BIBREF4 featurizer with local and global IDFs helps."
],
[
"We extend the features for contextual ranking by modifying the candidate QAs and user query in these ways:",
"$Query_{modified}$ = Query + Previous Answer; For instance, if user query is “yes\" and the previous answer is “do you want to know about XYZ\", the current query becomes “do you want to know about XYZ yes\".",
"Candidate QnA pairs are appended with its parent Questions and Answers; no contextual information is used from the user's query. For instance, if a candidate QnA has a question “benefits\" and its parent question was “know about XYZ\", the candidate QA's question is changed to “know about XYZ benefits\".",
"The features mentioned in Section SECREF20 are calculated for the above combinations also. These features carry contextual information."
],
[
"We use gradient-boosted decision trees as our ranking model to combine all the features. Early stopping BIBREF5 based on Generality-to-Progress ratio is used to decide the number of step trees and Tolerant Pruning BIBREF6 helps prevent overfitting. We follow incremental training if there is small changes in features or training data so that the score distribution is not changed drastically."
],
[
"We add support for bot-developers to directly enable handling chit-chat queries like “hi\", “thank you\", “what's up\" in their QnAMaker bots. In addition to chit-chat, we also give bot developers the flexibility to ground responses for such queries in a specific personality: professional, witty, friendly, caring, or enthusiastic. For example, the “Humorous\" personality can be used for a casual bot, whereas a “Professional\" personality is more suited in case of banking FAQs or task-completion bots. There is a list of 100+ predefined intents BIBREF7. There is a curated list of queries for each of these intents, along with a separate query understanding layer for ranking these intents. The arbitration between chit-chat answers and user's knowledge base answers is handled by using a chat-domain classifier BIBREF8."
],
[
"The majority of the KBs are created using existing FAQ pages or manuals but to improve the quality it requires effort from the developers. Active learning generates suggestions based on end-user feedback as well as ranker's implicit signals. For instance, if for a query, CDSSM feature was confident that one QnA should be ranked higher whereas wordnet feature thought other QnA should be ranked higher, active learning system will try to disambiguate it by showing this as a suggestion to the bot developer. To avoid showing similar suggestions to developers, DB-Scan clustering is done which optimizes the number of suggestions shown."
],
[
"QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19). Hybrid of deep learning(CDSSM) and machine learning features give our ranking model low computation cost, high explainability and significant F1/AUC score. Based on QnAMaker usage, we observed these trends:",
"Around 27% of the knowledge bases created use pre-built persona-based chitchat, out of which, $\\sim $4% of the knowledge bases are created for chit-chat alone. The highest used personality is Professional which is used in 9% knowledge bases.",
"Around $\\sim $25% developers have enabled active learning suggestions. The acceptance to reject ratio for active learning suggestions is 0.31.",
"25.5% of the knowledge bases use one URL as a source while creation. $\\sim $41% of the knowledge bases created use different sources like multiple URLs. 15.19% of the knowledge bases use both URL and editorial content as sources. Rest use just editorial content."
],
[
"We demonstrate QnAMaker: a service to add a conversational layer over semi-structured user data. In addition to query-answering, we support novel features like personality-grounded chit-chat, active learning based on user-interaction feedback (Figure FIGREF40), and hierarchical extraction for multi-turn conversations (Figure FIGREF41). The goal of the demonstration will be to show how easy it is to create an intelligent bot using QnAMaker. All the demonstrations will be done on the production website Demo Video can be seen here."
],
[
"The system currently doesn't highlight the answer span and does not generate answers taking the KB as grounding. We will be soon supporting Answer Span BIBREF9 and KB-grounded response generation BIBREF10 in QnAMaker. We are also working on user-defined personas for chit-chat (automatically learned from user-documents). We aim to enhance our extraction to be able to work for any unstructured document as well as images. We are also experimenting on improving our ranking system by using semantic vector-based search as our retrieval and transformer-based models for re-ranking."
]
]
} | {
"question": [
"What experiments do the authors present to validate their system?",
"How does the conversation layer work?",
"What components is the QnAMaker composed of?"
],
"question_id": [
"fd0ef5a7b6f62d07776bf672579a99c67e61a568",
"071bcb4b054215054f17db64bfd21f17fd9e1a80",
"f399d5a8dbeec777a858f81dc4dd33a83ba341a2"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" we measure our system's performance for datasets across various domains",
"evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19). Hybrid of deep learning(CDSSM) and machine learning features give our ranking model low computation cost, high explainability and significant F1/AUC score. Based on QnAMaker usage, we observed these trends:"
],
"highlighted_evidence": [
" To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19)."
]
}
],
"annotation_id": [
"c6aac397b3bf27942363d5b4be00bf094654d366"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"3c069b65ef0117a5d5c4ee9ac49ab6709cfbe124"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"QnAMaker Portal",
"QnaMaker Management APIs",
"Azure Search Index",
"QnaMaker WebApp",
"Bot"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"System description ::: Architecture",
"As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.",
"Bot: Calls the WebApp with the User's query to get results."
],
"highlighted_evidence": [
"System description ::: Architecture",
"The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. ",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. ",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. ",
"Bot: Calls the WebApp with the User's query to get results."
]
},
{
"unanswerable": false,
"extractive_spans": [
"QnAMaker Portal",
"QnaMaker Management APIs",
"Azure Search Index",
"QnaMaker WebApp",
"Bot"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.",
"Bot: Calls the WebApp with the User's query to get results."
],
"highlighted_evidence": [
"The components involved in the process are:\n\nQnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.\n\nQnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.\n\nAzure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.\n\nQnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.\n\nBot: Calls the WebApp with the User's query to get results."
]
}
],
"annotation_id": [
"443426bf61950f89af016a359cbdb0f5f3680d81",
"cc3663b4c97c95bfda1e9a6d64172abea619da01"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Interactions between various components of QnaMaker, along with their scopes: server-side and client-side",
"Table 1: Retrieval And Ranking Measurements",
"Figure 2: QnAMaker Runtime Pipeline",
"Figure 3: Active Learning Suggestions",
"Figure 4: Multi-Turn Knowledge Base"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png"
]
} |
1909.09491 | A simple discriminative training method for machine translation with large-scale features | Margin infused relaxed algorithms (MIRAs) dominate model tuning in statistical machine translation in the case of large-scale features, but they are also known for their implementation complexity. We introduce a new method, which regards an N-best list as a permutation and minimizes the Plackett-Luce loss of ground-truth permutations. Experiments with large-scale features demonstrate that the new method is more robust than MERT; although it is only comparable with MIRAs in performance, it has a comparative advantage: it is easier to implement. | {
"section_name": [
"Introduction",
"Plackett-Luce Model",
"Plackett-Luce Model in Statistical Machine Translation",
"Plackett-Luce Model in Statistical Machine Translation ::: N-best Hypotheses Resample",
"Evaluation",
"Evaluation ::: Plackett-Luce Model for SMT Tuning",
"Evaluation ::: Plackett-Luce Model for SMT Reranking"
],
"paragraphs": [
[
"Since Och BIBREF0 proposed minimum error rate training (MERT) to exactly optimize objective evaluation measures, MERT has become a standard model tuning technique in statistical machine translation (SMT). Though MERT performs better by improving its searching algorithm BIBREF1, BIBREF2, BIBREF3, BIBREF4, it does not work reasonably when there are lots of features. As a result, margin infused relaxed algorithms (MIRA) dominate in this case BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10.",
"In SMT, MIRAs consider margin losses related to sentence-level BLEUs. However, since the BLEU is not decomposable into each sentence, these MIRA algorithms use some heuristics to compute the exact losses, e.g., pseudo-document BIBREF8, and document-level loss BIBREF9.",
"Recently, another successful work in large-scale feature tuning include force decoding basedBIBREF11, classification based BIBREF12.",
"We aim to provide a simpler tuning method for large-scale features than MIRAs. Out motivation derives from an observation on MERT. As MERT considers the quality of only top1 hypothesis set, there might have more-than-one set of parameters, which have similar top1 performances in tuning, but have very different topN hypotheses. Empirically, we expect an ideal model to benefit the total N-best list. That is, better hypotheses should be assigned with higher ranks, and this might decrease the error risk of top1 result on unseen data.",
"PlackettBIBREF13 offered an easy-to-understand theory of modeling a permutation. An N-best list is assumedly generated by sampling without replacement. The $i$th hypothesis to sample relies on those ranked after it, instead of on the whole list. This model also supports a partial permutation which accounts for top $k$ positions in a list, regardless of the remaining. When taking $k$ as 1, this model reduces to a standard conditional probabilistic training, whose dual problem is actual the maximum entropy based BIBREF14. Although Och BIBREF0 substituted direct error optimization for a maximum entropy based training, probabilistic models correlate with BLEU well when features are rich enough. The similar claim also appears in BIBREF15. This also make the new method be applicable in large-scale features."
],
[
"Plackett-Luce was firstly proposed to predict ranks of horses in gambling BIBREF13. Let $\\mathbf {r}=(r_{1},r_{2}\\ldots r_{N})$ be $N$ horses with a probability distribution $\\mathcal {P}$ on their abilities to win a game, and a rank $\\mathbf {\\pi }=(\\pi (1),\\pi (2)\\ldots \\pi (|\\mathbf {\\pi }|))$ of horses can be understood as a generative procedure, where $\\pi (j)$ denotes the index of the horse in the $j$th position.",
"In the 1st position, there are $N$ horses as candidates, each of which $r_{j}$ has a probability $p(r_{j})$ to be selected. Regarding the rank $\\pi $, the probability of generating the champion is $p(r_{\\pi (1)})$. Then the horse $r_{\\pi (1)}$ is removed from the candidate pool.",
"In the 2nd position, there are only $N-1$ horses, and their probabilities to be selected become $p(r_{j})/Z_{2}$, where $Z_{2}=1-p(r_{\\pi (1)})$ is the normalization. Then the runner-up in the rank $\\pi $, the $\\pi (2)$th horse, is chosen at the probability $p(r_{\\pi (2)})/Z_{2}$. We use a consistent terminology $Z_{1}$ in selecting the champion, though $Z_{1}$ equals 1 trivially.",
"This procedure iterates to the last rank in $\\pi $. The key idea for the Plackett-Luce model is the choice in the $i$th position in a rank $\\mathbf {\\pi }$ only depends on the candidates not chosen at previous stages. The probability of generating a rank $\\pi $ is given as follows",
"where $Z_{j}=1-\\sum _{t=1}^{j-1}p(r_{\\pi (t)})$.",
"We offer a toy example (Table TABREF3) to demonstrate this procedure.",
"Theorem 1 The permutation probabilities $p(\\mathbf {\\pi })$ form a probability distribution over a set of permutations $\\Omega _{\\pi }$. For example, for each $\\mathbf {\\pi }\\in \\Omega _{\\pi }$, we have $p(\\mathbf {\\pi })>0$, and $\\sum _{\\pi \\in \\Omega _{\\pi }}p(\\mathbf {\\pi })=1$.",
"We have to note that, $\\Omega _{\\pi }$ is not necessarily required to be completely ranked permutations in theory and in practice, since gamblers might be interested in only the champion and runner-up, and thus $|\\mathbf {\\pi }|\\le N$. In experiments, we would examine the effects on different length of permutations, systems being termed $PL(|\\pi |)$.",
"Theorem 2 Given any two permutations $\\mathbf {\\pi }$ and $\\mathbf {\\pi }\\prime $, and they are different only in two positions $p$ and $q$, $p<q$, with $\\pi (p)=\\mathbf {\\pi }\\prime (q)$ and $\\pi (q)=\\mathbf {\\pi }\\prime (p)$. If $p(\\pi (p))>p(\\pi (q))$, then $p(\\pi )>p(\\pi \\prime )$.",
"In other words, exchanging two positions in a permutation where the horse more likely to win is not ranked before the other would lead to an increase of the permutation probability.",
"This suggests the ground-truth permutation, ranked decreasingly by their probabilities, owns the maximum permutation probability on a given distribution. In SMT, we are motivated to optimize parameters to maximize the likelihood of ground-truth permutation of an N-best hypotheses.",
"Due to the limitation of space, see BIBREF13, BIBREF16 for the proofs of the theorems."
],
[
"In SMT, let $\\mathbf {f}=(f_{1},f_{2}\\ldots )$ denote source sentences, and $\\mathbf {e}=(\\lbrace e_{1,1},\\ldots \\rbrace ,\\lbrace e_{2,1},\\ldots \\rbrace \\ldots )$ denote target hypotheses. A set of features are defined on both source and target side. We refer to $h(e_{i,*})$ as a feature vector of a hypothesis from the $i$th source sentence, and its score from a ranking function is defined as the inner product $h(e_{i,*})^{T}w$ of the weight vector $w$ and the feature vector.",
"We first follow the popular exponential style to define a parameterized probability distribution over a list of hypotheses.",
"The ground-truth permutation of an $n$best list is simply obtained after ranking by their sentence-level BLEUs. Here we only concentrate on their relative ranks which are straightforward to compute in practice, e.g. add 1 smoothing. Let $\\pi _{i}^{*}$ be the ground-truth permutation of hypotheses from the $i$th source sentences, and our optimization objective is maximizing the log-likelihood of the ground-truth permutations and penalized using a zero-mean and unit-variance Gaussian prior. This results in the following objective and gradient:",
"where $Z_{i,j}$ is defined as the $Z_{j}$ in Formula (1) of the $i$th source sentence.",
"The log-likelihood function is smooth, differentiable, and concave with the weight vector $w$, and its local maximal solution is also a global maximum. Iteratively selecting one parameter in $\\alpha $ for tuning in a line search style (or MERT style) could also converge into the global global maximum BIBREF17. In practice, we use more fast limited-memory BFGS (L-BFGS) algorithm BIBREF18."
],
[
"The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.",
"The greater, the richer. In practice, we find a rough threshold of r is 5.",
"In engineering, the size of an N-best list with unique hypotheses is usually less than several thousands. This suggests that, if features are up to thousands or more, the Plackett-Luce model is quite suitable here. Otherwise, we could reduce the size of N-best lists by sampling to make $r$ beyond the threshold.",
"Their may be other efficient sampling methods, and here we adopt a simple one. If we want to $m$ samples from a list of hypotheses $\\mathbf {e}$, first, the $\\frac{m}{3}$ best hypotheses and the $\\frac{m}{3}$ worst hypotheses are taken by their sentence-level BLEUs. Second, we sample the remaining hypotheses on distribution $p(e_{i})\\propto \\exp (h(e_{i})^{T}w)$, where $\\mathbf {w}$ is an initial weight from last iteration."
],
[
"We compare our method with MERT and MIRA in two tasks, iterative training, and N-best list rerank. We do not list PRO BIBREF12 as our baseline, as Cherry et al.BIBREF10 have compared PRO with MIRA and MERT massively.",
"In the first task, we align the FBIS data (about 230K sentence pairs) with GIZA++, and train a 4-gram language model on the Xinhua portion of Gigaword corpus. A hierarchical phrase-based (HPB) model (Chiang, 2007) is tuned on NIST MT 2002, and tested on MT 2004 and 2005. All features are eight basic ones BIBREF20 and extra 220 group features. We design such feature templates to group grammars by the length of source side and target side, (feat-type,a$\\le $src-side$\\le $b,c$\\le $tgt-side$\\le $d), where the feat-type denotes any of the relative frequency, reversed relative frequency, lexical probability and reversed lexical probability, and [a, b], [c, d] enumerate all possible subranges of [1, 10], as the maximum length on both sides of a hierarchical grammar is limited to 10. There are 4 $\\times $ 55 extra group features.",
"In the second task, we rerank an N-best list from a HPB system with 7491 features from a third party. The system uses six million parallel sentence pairs available to the DARPA BOLT Chinese-English task. This system includes 51 dense features (translation probabilities, provenance features, etc.) and up to 7440 sparse features (mostly lexical and fertility-based). The language model is a 6-gram model trained on a 10 billion words, including the English side of our parallel corpora plus other corpora such as Gigaword (LDC2011T07) and Google News. For the tuning and test sets, we use 1275 and 1239 sentences respectively from the LDC2010E30 corpus."
],
[
"We conduct a full training of machine translation models. By default, a decoder is invoked for at most 40 times, and each time it outputs 200 hypotheses to be combined with those from previous iterations and sent into tuning algorithms.",
"In getting the ground-truth permutations, there are many ties with the same sentence-level BLEU, and we just take one randomly. In this section, all systems have only around two hundred features, hence in Plackett-Luce based training, we sample 30 hypotheses in an accumulative $n$best list in each round of training.",
"All results are shown in Table TABREF10, we can see that all PL($k$) systems does not perform well as MERT or MIRA in the development data, this maybe due to that PL($k$) systems do not optimize BLEU and the features here are relatively not enough compared to the size of N-best lists (empirical Formula DISPLAY_FORM9). However, PL($k$) systems are better than MERT in testing. PL($k$) systems consider the quality of hypotheses from the 2th to the $k$th, which is guessed to act the role of the margin like SVM in classification . Interestingly, MIRA wins first in training, and still performs quite well in testing.",
"The PL(1) system is equivalent to a max-entropy based algorithm BIBREF14 whose dual problem is actually maximizing the conditional probability of one oracle hypothesis. When we increase the $k$, the performances improve at first. After reaching a maximum around $k=5$, they decrease slowly. We explain this phenomenon as this, when features are rich enough, higher BLEU scores could be easily fitted, then longer ground-truth permutations include more useful information."
],
[
"After being de-duplicated, the N-best list has an average size of around 300, and with 7491 features. Refer to Formula DISPLAY_FORM9, this is ideal to use the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phenomena.",
"First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.",
"Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line. After 500 L-BFGS iterations, their performances are no less than the baseline, though only by a small margin.",
"This experiment displays, in large-scale features, the Plackett-Luce model correlates with BLEU score very well, and alleviates overfitting in some degree."
]
]
} | {
"question": [
"How they measure robustness in experiments?",
"Is new method inferior in terms of robustness to MIRAs in experiments?",
"What experiments with large-scale features are performed?"
],
"question_id": [
"d28260b5565d9246831e8dbe594d4f6211b60237",
"8670989ca39214eda6c1d1d272457a3f3a92818b",
"923b12c0a50b0ee22237929559fad0903a098b7b"
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"We empirically provide a formula to measure the richness in the scenario of machine translation."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.",
"The greater, the richer. In practice, we find a rough threshold of r is 5."
],
"highlighted_evidence": [
"The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.",
"The greater, the richer. In practice, we find a rough threshold of r is 5"
]
},
{
"unanswerable": false,
"extractive_spans": [
"boost the training BLEU very greatly",
"the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.",
"Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line. After 500 L-BFGS iterations, their performances are no less than the baseline, though only by a small margin."
],
"highlighted_evidence": [
"First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.\n\nSecond, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line."
]
}
],
"annotation_id": [
"8408c034789c854514cebd1a01819cafc3ffee55",
"9b2644f3909be4ec61d48c8644297775e139f448"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"c60226e79eec043a0ddb74ae86e428bf6037b38d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Plackett-Luce Model for SMT Reranking"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Evaluation ::: Plackett-Luce Model for SMT Reranking",
"After being de-duplicated, the N-best list has an average size of around 300, and with 7491 features. Refer to Formula DISPLAY_FORM9, this is ideal to use the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phenomena.",
"This experiment displays, in large-scale features, the Plackett-Luce model correlates with BLEU score very well, and alleviates overfitting in some degree."
],
"highlighted_evidence": [
" Plackett-Luce Model for SMT Reranking\nAfter being de-duplicated, the N-best list has an average size of around 300, and with 7491 features.",
"This experiment displays, in large-scale features, the Plackett-Luce model correlates with BLEU score very well, and alleviates overfitting in some degree."
]
}
],
"annotation_id": [
"3e2e4494d3cb470aa9c8301507e6f8db5dcf44ab"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 2: PL(k): Plackett-Luce model optimizing the ground-truth permutation with length k. The significant symbols (+ at 0.05 level) are compared with MERT. The bold font numbers signifies better results compared to M(1) system.",
"Figure 1: PL(k) with 500 L-BFGS iterations, k=1,3,5,7,9,12,15 compared with MIRA in reranking."
],
"file": [
"4-Table2-1.png",
"5-Figure1-1.png"
]
} |
2001.05284 | Improving Spoken Language Understanding By Exploiting ASR N-best Hypotheses | In a modern spoken language understanding (SLU) system, the natural language understanding (NLU) module takes interpretations of a speech from the automatic speech recognition (ASR) module as the input. The NLU module usually uses the first best interpretation of a given speech in downstream tasks such as domain and intent classification. However, the ASR module might misrecognize some speeches and the first best interpretation could be erroneous and noisy. Solely relying on the first best interpretation could make the performance of downstream tasks non-optimal. To address this issue, we introduce a series of simple yet efficient models for improving the understanding of semantics of the input speeches by collectively exploiting the n-best speech interpretations from the ASR module. | {
"section_name": [
"Introduction",
"Baseline, Oracle and Direct Models ::: Baseline and Oracle",
"Baseline, Oracle and Direct Models ::: Direct Models",
"Integration of N-BEST Hypotheses",
"Integration of N-BEST Hypotheses ::: Hypothesized Text Concatenation",
"Integration of N-BEST Hypotheses ::: Hypothesis Embedding Concatenation",
"Experiment ::: Dataset",
"Experiment ::: Performance on Entire Test Set",
"Experiment ::: Performance Comparison among Various Subsets",
"Experiment ::: Improvements on Different Domains and Different Numbers of Hypotheses",
"Experiment ::: Intent Classification",
"Conclusions and Future Work",
"Acknowledgements"
],
"paragraphs": [
[
"Currently, voice-controlled smart devices are widely used in multiple areas to fulfill various tasks, e.g. playing music, acquiring weather information and booking tickets. The SLU system employs several modules to enable the understanding of the semantics of the input speeches. When there is an incoming speech, the ASR module picks it up and attempts to transcribe the speech. An ASR model could generate multiple interpretations for most speeches, which can be ranked by their associated confidence scores. Among the $n$-best hypotheses, the top-1 hypothesis is usually transformed to the NLU module for downstream tasks such as domain classification, intent classification and named entity recognition (slot tagging). Multi-domain NLU modules are usually designed hierarchically BIBREF0. For one incoming utterance, NLU modules will firstly classify the utterance as one of many possible domains and the further analysis on intent classification and slot tagging will be domain-specific.",
"In spite of impressive development on the current SLU pipeline, the interpretation of speech could still contain errors. Sometimes the top-1 recognition hypothesis of ASR module is ungrammatical or implausible and far from the ground-truth transcription BIBREF1, BIBREF2. Among those cases, we find one interpretation exact matching with or more similar to transcription can be included in the remaining hypotheses ($2^{nd}- n^{th}$).",
"To illustrate the value of the $2^{nd}- n^{th}$ hypotheses, we count the frequency of exact matching and more similar (smaller edit distance compared to the 1st hypothesis) to transcription for different positions of the $n$-best hypotheses list. Table TABREF1 exhibits the results. For the explored dataset, we only collect the top 5 interpretations for each utterance ($n = 5$). Notably, when the correct recognition exists among the 5 best hypotheses, 50% of the time (sum of the first row's percentages) it occurs among the $2^{nd}-5^{th}$ positions. Moreover, as shown by the second row in Table TABREF1, compared to the top recognition hypothesis, the other hypotheses can sometimes be more similar to the transcription.",
"Over the past few years, we have observed the success of reranking the $n$-best hypotheses BIBREF1, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 before feeding the best interpretation to the NLU module. These approaches propose the reranking framework by involving morphological, lexical or syntactic features BIBREF8, BIBREF9, BIBREF10, speech recognition features like confidence score BIBREF1, BIBREF4, and other features like number of tokens, rank position BIBREF1. They are effective to select the best from the hypotheses list and reduce the word error rate (WER) BIBREF11 of speech recognition.",
"Those reranking models could benefit the first two cases in Table TABREF2 when there is an utterance matching with transcription. However, in other cases like the third row, it is hard to integrate the fragmented information in multiple hypotheses.",
"This paper proposes various methods integrating $n$-best hypotheses to tackle the problem. To the best of our knowledge, this is the first study that attempts to collectively exploit the $n$-best speech interpretations in the SLU system. This paper serves as the basis of our $n$-best-hypotheses-based SLU system, focusing on the methods of integration for the hypotheses. Since further improvements of the integration framework require considerable setup and descriptions, where jointly optimized tasks (e.g. transcription reconstruction) trained with multiple ways (multitask BIBREF12, multistage learning BIBREF13) and more features (confidence score, rank position, etc.) are involved, we leave those to a subsequent article.",
"This paper is organized as follows. Section SECREF2 introduces the Baseline, Oracle and Direct models. Section SECREF3 describes proposed ways to integrate $n$-best hypotheses during training. The experimental setup and results are described in Section SECREF4. Section SECREF5 contains conclusions and future work."
],
[
"The preliminary architecture is shown in Fig. FIGREF4. For a given transcribed utterance, it is firstly encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm splitting words to fundamental subword units (pairs of bytes or BPs) and reducing the embedded vocabulary size. Then we use a BiLSTM BIBREF15 encoder and the output state of the BiLSTM is regarded as a vector representation for this utterance. Finally, a fully connected Feed-forward Neural Network (FNN) followed by a softmax layer, labeled as a multilayer perceptron (MLP) module, is used to perform the domain/intent classification task based on the vector.",
"For convenience, we simplify the whole process in Fig.FIGREF4 as a mapping $BM$ (Baseline Mapping) from the input utterance $S$ to an estimated tag's probability $p(\\tilde{t})$, where $p(\\tilde{t}) \\leftarrow BM(S)$. The $Baseline$ is trained on transcription and evaluated on ASR 1st best hypothesis ($S=\\text{ASR}\\ 1^{st}\\ \\text{best})$. The $Oracle$ is trained on transcription and evaluated on transcription ($S = \\text{Transcription}$). We name it Oracle simply because we assume that hypotheses are noisy versions of transcription."
],
[
"Besides the Baseline and Oracle, where only ASR 1-best hypothesis is considered, we also perform experiments to utilize ASR $n$-best hypotheses during evaluation. The models evaluating with $n$-bests and a BM (pre-trained on transcription) are called Direct Models (in Fig. FIGREF7):",
"Majority Vote. We apply the BM model on each hypothesis independently and combine the predictions by picking the majority predicted label, i.e. Music.",
"",
"Sort by Score. After parallel evaluation on all hypotheses, sort the prediction by the corresponding confidence score and choose the one with the highest score, i.e. Video.",
"",
"Rerank (Oracle). Since the current rerank models (e.g., BIBREF1, BIBREF3, BIBREF4) attempt to select the hypothesis most similar to transcription, we propose the Rerank (Oracle), which picks the hypothesis with the smallest edit distance to transcription (assume it is the $a$-th best) during evaluation and uses its corresponding prediction."
],
[
"All the above mentioned models apply the BM trained on one interpretation (transcription). Their abilities to take advantage of multiple interpretations are actually not trained. As a further step, we propose multiple ways to integrate the $n$-best hypotheses during training. The explored methods can be divided into two groups as shown in Fig. FIGREF11. Let $H_1, H_2,..., H_n $ denote all the hypotheses from ASR and $bp_{H_k, i} \\in BPs$ denotes the $i$-th pair of bytes (BP) in the $k^{th}$ best hypothesis. The model parameters associated with the two possible ways both contain: embedding $e_{bp}$ for pairs of bytes, BiLSTM parameters $\\theta $ and MLP parameters $W, b$."
],
[
"The basic integration method (Combined Sentence) concatenates the $n$-best hypothesized text. We separate hypotheses with a special delimiter ($<$SEP$>$). We assume BPE totally produces $m$ BPs (delimiters are not split during encoding). Suppose the $n^{th}$ hypothesis has $j$ pairs. The entire model can be formulated as:",
"In Eqn. DISPLAY_FORM13, the connected hypotheses and separators are encoded via BiLSTM to a sequence of hidden state vectors. Each hidden state vector, e.g. $h_1$, is the concatenation of forward $h_{1f}$ and backward $h_{1b}$ states. The concatenation of the last state of the forward and backward LSTM forms the output vector of BiLSTM (concatenation denoted as $[,]$). Then, in Eqn. DISPLAY_FORM14, the MLP module defines the probability of a specific tag (domain or intent) $\\tilde{t}$ as the normalized activation ($\\sigma $) output after linear transformation of the output vector."
],
[
"The concatenation of hypothesized text leverages the $n$-best list by transferring information among hypotheses in an embedding framework, BiLSTM. However, since all the layers have access to both the preceding and subsequent information, the embedding among $n$-bests will influence each other, which confuses the embedding and makes the whole framework sensitive to the noise in hypotheses.",
"As the second group of integration approaches, we develop models, PoolingAvg/Max, on the concatenation of hypothesis embedding, which isolate the embedding process among hypotheses and summarize the features by a pooling layer. For each hypothesis (e.g., $i^{th}$ best in Eqn. DISPLAY_FORM16 with $j$ pairs of bytes), we could get a sequence of hidden states from BiLSTM and obtain its final output state by concatenating the first and last hidden state ($h_{output_i}$ in Eqn. DISPLAY_FORM17). Then, we stack all the output states vertically as shown in Eqn. SECREF15. Note that in the real data, we will not always have a fixed size of hypotheses list. For a list with $r$ ($<n$) interpretations, we get the embedding for each of them and pad with the embedding of the first best hypothesis until a fixed size $n$. When $r\\ge n$, we only stack the top $n$ embeddings. We employ $h_{output_1}$ for padding to enhance the influence of the top 1 hypothesis, which is more reliable. Finally, one unified representation could be achieved via Pooling (Max/Avg pooling with $n$ by 1 sliding window and stride 1) on the concatenation and one score could be produced per possible tag for the given task."
],
[
"We conduct our experiments on $\\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains."
],
[
"Table TABREF24 shows the relative error reduction (RErr) of Baseline, Oracle and our proposed models on the entire test set ($\\sim $ 300K utterances) for multi-class domain classification. We can see among all the direct methods, predicting based on the hypothesis most similar to the transcription (Rerank (Oracle)) is the best.",
"As for the other models attempting to integrate the $n$-bests during training, PoolingAvg gets the highest relative improvement, 14.29%. It as well turns out that all the integration methods outperform direct models drastically. This shows that having access to $n$-best hypotheses during training is crucial for the quality of the predicted semantics."
],
[
"To further detect the reason for improvements, we split the test set into two parts based on whether ASR first best agrees with transcription and evaluate separately. Comparing Table TABREF26 and Table TABREF27, obviously the benefits of using multiple hypotheses are mainly gained when ASR 1st best disagrees with the transcription. When ASR 1st best agrees with transcription, the proposed integration models can also keep the performance. Under that condition, we can still improve a little (3.56%) because, by introducing multiple ASR hypotheses, we could have more information and when the transcription/ASR 1st best does not appear in the training set's transcriptions, its $n$-bests list may have similar hypotheses included in the training set's $n$-bests. Then, our integration model trained on $n$-best hypotheses as well has clue to predict. The series of comparisons reveal that our approaches integrating the hypotheses are robust to the ASR errors and whenever the ASR model makes mistakes, we can outperform more significantly."
],
[
"Among all the 23 domains, we choose 8 popular domains for further comparisons between the Baseline and the best model of Table TABREF24, PoolingAvg. Fig. FIGREF29 exhibits the results. We could find the PoolingAvg consistently improves the accuracy for all 8 domains.",
"In the previous experiments, the number of utilized hypotheses for each utterance during evaluation is five, which means we use the top 5 interpretations when the size of ASR recognition list is not smaller than 5 and use all the interpretations otherwise. Changing the number of hypotheses while evaluation, Fig. FIGREF30 shows a monotonic increase with the access to more hypotheses for the PoolingAvg and PoolingMax (Sort by Score is shown because it is the best achievable direct model while the Rerank (Oracle) is not realistic). The growth becomes gentle after four hypotheses are leveraged."
],
[
"Since another downstream task, intent classification, is similar to domain classification, we just show the best model in domain classification, PoolingAvg, on domain-specific intent classification for three popular domains due to space limit. As Table TABREF32 shows, the margins of using multiple hypotheses with PoolingAvg are significant as well."
],
[
"This paper improves the SLU system robustness to ASR errors by integrating $n$-best hypotheses in different ways, e.g. the aggregation of predictions from hypotheses or the concatenation of hypothesis text or embedding. We can achieve significant classification accuracy improvements over production-quality baselines on domain and intent classifications, 14% to 25% relative gains. The improvement is more significant for a subset of testing data where ASR first best is different from transcription. We also observe that with more hypotheses utilized, the performance can be further improved. In the future, we aim to employ additional features (e.g. confidence scores for hypotheses or tokens) to integrate $n$-bests more efficiently, where we can train a function $f$ to obtain a weight for each hypothesis embedding before pooling. Another direction is using deep learning framework to embed the word lattice BIBREF16 or confusion network BIBREF17, BIBREF18, which can provide a compact representation of multiple hypotheses and more information like times, in the SLU system."
],
[
"We would like to thank Junghoo (John) Cho for proofreading."
]
]
} | {
"question": [
"Which ASR system(s) is used in this work?",
"What are the series of simple models?",
"Over which datasets/corpora is this work evaluated?"
],
"question_id": [
"67131c15aceeb51ae1d3b2b8241c8750a19cca8e",
"579a0603ec56fc2b4aa8566810041dbb0cd7b5e7",
"c9c85eee41556c6993f40e428fa607af4abe80a9"
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Oracle "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The preliminary architecture is shown in Fig. FIGREF4. For a given transcribed utterance, it is firstly encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm splitting words to fundamental subword units (pairs of bytes or BPs) and reducing the embedded vocabulary size. Then we use a BiLSTM BIBREF15 encoder and the output state of the BiLSTM is regarded as a vector representation for this utterance. Finally, a fully connected Feed-forward Neural Network (FNN) followed by a softmax layer, labeled as a multilayer perceptron (MLP) module, is used to perform the domain/intent classification task based on the vector.",
"For convenience, we simplify the whole process in Fig.FIGREF4 as a mapping $BM$ (Baseline Mapping) from the input utterance $S$ to an estimated tag's probability $p(\\tilde{t})$, where $p(\\tilde{t}) \\leftarrow BM(S)$. The $Baseline$ is trained on transcription and evaluated on ASR 1st best hypothesis ($S=\\text{ASR}\\ 1^{st}\\ \\text{best})$. The $Oracle$ is trained on transcription and evaluated on transcription ($S = \\text{Transcription}$). We name it Oracle simply because we assume that hypotheses are noisy versions of transcription."
],
"highlighted_evidence": [
"For a given transcribed utterance, it is firstly encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm splitting words to fundamental subword units (pairs of bytes or BPs) and reducing the embedded vocabulary size. Then we use a BiLSTM BIBREF15 encoder and the output state of the BiLSTM is regarded as a vector representation for this utterance. Finally, a fully connected Feed-forward Neural Network (FNN) followed by a softmax layer, labeled as a multilayer perceptron (MLP) module, is used to perform the domain/intent classification task based on the vector.",
"We name it Oracle simply because we assume that hypotheses are noisy versions of transcription."
]
}
],
"annotation_id": [
"cc4f5dc6fadb450c42c98b7dce31fde7fc51561c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"perform experiments to utilize ASR $n$-best hypotheses during evaluation"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Besides the Baseline and Oracle, where only ASR 1-best hypothesis is considered, we also perform experiments to utilize ASR $n$-best hypotheses during evaluation. The models evaluating with $n$-bests and a BM (pre-trained on transcription) are called Direct Models (in Fig. FIGREF7):"
],
"highlighted_evidence": [
"Besides the Baseline and Oracle, where only ASR 1-best hypothesis is considered, we also perform experiments to utilize ASR $n$-best hypotheses during evaluation."
]
}
],
"annotation_id": [
"84b162ceba940564b61ba3742fd9e46969b8acf3"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"$\\sim $ 8.7M annotated anonymised user utterances"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We conduct our experiments on $\\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains."
],
"highlighted_evidence": [
"We conduct our experiments on $\\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains."
]
},
{
"unanswerable": false,
"extractive_spans": [
"on $\\sim $ 8.7M annotated anonymised user utterances"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We conduct our experiments on $\\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains."
],
"highlighted_evidence": [
"We conduct our experiments on $\\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains."
]
}
],
"annotation_id": [
"3e6d548d2e8f4585bf072f42364eeba556063af6",
"81e7ce7d6fcafeea5ba157233dc3e6d047030034"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [
"Fig. 3: Integration of n-best hypotheses with two possible ways: 1) concatenate hypothesized text and 2) concatenate hypothesis embedding.",
"Table 3: Micro and Macro F1 score for multi-class domain classification.",
"Table 4: Performance comparison for the subset (∼ 19%) where ASR first best disagrees with transcription.",
"Table 5: Performance comparison for the subset (∼ 81%) where ASR first best agrees with transcription.",
"Fig. 5: The influence of different amount of hypotheses.",
"Table 6: Intent classification for three important domains.",
"Fig. 4: Improvements on important domains."
],
"file": [
"3-Figure3-1.png",
"3-Table3-1.png",
"3-Table4-1.png",
"4-Table5-1.png",
"4-Figure5-1.png",
"4-Table6-1.png",
"4-Figure4-1.png"
]
} |
1909.12140 | DisSim: A Discourse-Aware Syntactic Text Simplification Framework for English and German | We introduce DisSim, a discourse-aware sentence splitting framework for English and German whose goal is to transform syntactically complex sentences into an intermediate representation that presents a simple and more regular structure which is easier to process for downstream semantic applications. For this purpose, we turn input sentences into a two-layered semantic hierarchy in the form of core facts and accompanying contexts, while identifying the rhetorical relations that hold between them. In that way, we preserve the coherence structure of the input and, hence, its interpretability for downstream tasks. | {
"section_name": [
"Introduction",
"System Description",
"System Description ::: Split into Minimal Propositions",
"System Description ::: Establish a Semantic Hierarchy",
"System Description ::: Establish a Semantic Hierarchy ::: Constituency Type Classification.",
"System Description ::: Establish a Semantic Hierarchy ::: Rhetorical Relation Identification.",
"Usage",
"Experiments",
"Application in Downstream Tasks",
"Conclusion"
],
"paragraphs": [
[
"We developed a syntactic text simplification (TS) approach that can be used as a preprocessing step to facilitate and improve the performance of a wide range of artificial intelligence (AI) tasks, such as Machine Translation, Information Extraction (IE) or Text Summarization. Since shorter sentences are generally better processed by natural language processing (NLP) systems BIBREF0, the goal of our approach is to break down a complex source sentence into a set of minimal propositions, i.e. a sequence of sound, self-contained utterances, with each of them presenting a minimal semantic unit that cannot be further decomposed into meaningful propositions BIBREF1.",
"However, any sound and coherent text is not simply a loose arrangement of self-contained units, but rather a logical structure of utterances that are semantically connected BIBREF2. Consequently, when carrying out syntactic simplification operations without considering discourse implications, the rewriting may easily result in a disconnected sequence of simplified sentences that lack important contextual information, making the text harder to interpret. Thus, in order to preserve the coherence structure and, hence, the interpretability of the input, we developed a discourse-aware TS approach based on Rhetorical Structure Theory (RST) BIBREF3. It establishes a contextual hierarchy between the split components, and identifies and classifies the semantic relationship that holds between them. In that way, a complex source sentence is turned into a so-called discourse tree, consisting of a set of hierarchically ordered and semantically interconnected sentences that present a simplified syntax which is easier to process for downstream semantic applications and may support a faster generalization in machine learning tasks."
],
[
"We present DisSim, a discourse-aware sentence splitting approach for English and German that creates a semantic hierarchy of simplified sentences. It takes a sentence as input and performs a recursive transformation process that is based upon a small set of 35 hand-crafted grammar rules for the English version and 29 rules for the German approach. These patterns were heuristically determined in a comprehensive linguistic analysis and encode syntactic and lexical features that can be derived from a sentence's parse tree. Each rule specifies (1) how to split up and rephrase the input into structurally simplified sentences and (2) how to set up a semantic hierarchy between them. They are recursively applied on a given source sentence in a top-down fashion. When no more rule matches, the algorithm stops and returns the generated discourse tree."
],
[
"In a first step, source sentences that present a complex linguistic form are turned into clean, compact structures by decomposing clausal and phrasal components. For this purpose, the transformation rules encode both the splitting points and rephrasing procedure for reconstructing proper sentences."
],
[
"Each split will create two or more sentences with a simplified syntax. To establish a semantic hierarchy between them, two subtasks are carried out:"
],
[
"First, we set up a contextual hierarchy between the split sentences by connecting them with information about their hierarchical level, similar to the concept of nuclearity in RST. For this purpose, we distinguish core sentences (nuclei), which carry the key information of the input, from accompanying contextual sentences (satellites) that disclose additional information about it. To differentiate between those two types of constituents, the transformation patterns encode a simple syntax-based approach where subordinate clauses/phrases are classified as context sentences, while superordinate as well as coordinate clauses/phrases are labelled as core."
],
[
"Second, we aim to restore the semantic relationship between the disembedded components. For this purpose, we identify and classify the rhetorical relations that hold between the simplified sentences, making use of both syntactic features, which are derived from the input's parse tree structure, and lexical features in the form of cue phrases. Following the work of Taboada13, they are mapped to a predefined list of rhetorical cue words to infer the type of rhetorical relation."
],
[
"DisSim can be either used as a Java API, imported as a Maven dependency, or as a service which we provide through a command line interface or a REST-like web service that can be deployed via docker. It takes as input NL text in the form of a single sentence. Alternatively, a file containing a sequence of sentences can be loaded. The result of the transformation process is either written to the console or stored in a specified output file in JSON format. We also provide a browser-based user interface, where the user can directly type in sentences to be processed (see Figure FIGREF1)."
],
[
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
],
[
"An extrinsic evaluation was carried out on the task of Open IE BIBREF7. It revealed that when applying DisSim as a preprocessing step, the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall, i.e. leading to a lower information loss and a higher accuracy of the extracted relations. For details, the interested reader may refer to niklaus-etal-2019-transforming.",
"Moreover, most current Open IE approaches output only a loose arrangement of extracted tuples that are hard to interpret as they ignore the context under which a proposition is complete and correct and thus lack the expressiveness needed for a proper interpretation of complex assertions BIBREF8. As illustrated in Figure FIGREF9, with the help of the semantic hierarchy generated by our discourse-aware sentence splitting approach the output of Open IE systems can be easily enriched with contextual information that allows to restore the semantic relationship between a set of propositions and, hence, preserve their interpretability in downstream tasks."
],
[
"We developed and implemented a discourse-aware syntactic TS approach that recursively splits and rephrases complex English or German sentences into a semantic hierarchy of simplified sentences. The resulting lightweight semantic representation can be used to facilitate and improve a variety of AI tasks."
]
]
} | {
"question": [
"Is the semantic hierarchy representation used for any task?",
"What are the corpora used for the task?",
"Is the model evaluated?"
],
"question_id": [
"f8281eb49be3e8ea0af735ad3bec955a5dedf5b3",
"a5ee9b40a90a6deb154803bef0c71c2628acb571",
"e286860c41a4f704a3a08e45183cb8b14fa2ad2f"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"German",
"German",
"German"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Yes, Open IE",
"evidence": [
"An extrinsic evaluation was carried out on the task of Open IE BIBREF7. It revealed that when applying DisSim as a preprocessing step, the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall, i.e. leading to a lower information loss and a higher accuracy of the extracted relations. For details, the interested reader may refer to niklaus-etal-2019-transforming."
],
"highlighted_evidence": [
"An extrinsic evaluation was carried out on the task of Open IE BIBREF7."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Moreover, most current Open IE approaches output only a loose arrangement of extracted tuples that are hard to interpret as they ignore the context under which a proposition is complete and correct and thus lack the expressiveness needed for a proper interpretation of complex assertions BIBREF8. As illustrated in Figure FIGREF9, with the help of the semantic hierarchy generated by our discourse-aware sentence splitting approach the output of Open IE systems can be easily enriched with contextual information that allows to restore the semantic relationship between a set of propositions and, hence, preserve their interpretability in downstream tasks."
],
"highlighted_evidence": [
"As illustrated in Figure FIGREF9, with the help of the semantic hierarchy generated by our discourse-aware sentence splitting approach the output of Open IE systems can be easily enriched with contextual information that allows to restore the semantic relationship between a set of propositions and, hence, preserve their interpretability in downstream tasks."
]
}
],
"annotation_id": [
"4083f879cdc02cfa51c88a45ce16e30707a8a63e",
"d12ac9d62a47d355ba1fdd0799c58e59877d5eb8"
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains",
"The evaluation of the German version is in progress."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
],
"highlighted_evidence": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains ",
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains ",
"The evaluation of the German version is in progress."
]
}
],
"annotation_id": [
"c3e99448c2420d3cb04bd3efce32a638d0e62a31"
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "the English version is evaluated. The German version evaluation is in progress ",
"evidence": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
],
"highlighted_evidence": [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress."
]
}
],
"annotation_id": [
"f819d17832ad50d4b30bba15edae222e7cf068c1"
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
}
]
} | {
"caption": [
"Figure 1: DISSIM’s browser-based user interface. The simplified output is displayed in the form of a directed graph where the split sentences are connected by arrows whose labels denote the semantic relationship that holds between a pair of simplified sentences and whose direction indicates their contextual hierarchy. The colors signal different context layers. In that way, a semantic hierarchy of minimal, self-contained propositions is established.",
"Figure 2: Comparison of the propositions extracted by Supervised-OIE (Stanovsky et al., 2018) with (5-11) and without (1-4) using our discourse-aware TS approach as a preprocessing step."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png"
]
} |
1709.00947 | Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects | This paper describes a preliminary study for producing and distributing a large-scale database of embeddings from the Portuguese Twitter stream. We start by experimenting with a relatively small sample and focusing on three challenges: volume of training data, vocabulary size and intrinsic evaluation metrics. Using a single GPU, we were able to scale up vocabulary size from 2048 words embedded and 500K training examples to 32768 words over 10M training examples while keeping a stable validation loss and approximately linear trend on training time per epoch. We also observed that using less than 50\% of the available training examples for each vocabulary size might result in overfitting. Results on intrinsic evaluation show promising performance for a vocabulary size of 32768 words. Nevertheless, intrinsic evaluation metrics suffer from over-sensitivity to their corresponding cosine similarity thresholds, indicating that a wider range of metrics need to be developed to track progress. | {
"section_name": [
"Introduction",
"Related Work",
"Our Neural Word Embedding Model",
"Experimental Setup",
"Training Data",
"Metrics related with the Learning Process",
"Tests and Gold-Standard Data for Intrinsic Evaluation",
"Results and Analysis",
"Intrinsic Evaluation",
"Further Analysis regarding Evaluation Metrics",
"Conclusions"
],
"paragraphs": [
[
"Word embeddings have great practical importance since they can be used as pre-computed high-density features to ML models, significantly reducing the amount of training data required in a variety of NLP tasks. However, there are several inter-related challenges with computing and consistently distributing word embeddings concerning the:",
"Not only the space of possibilities for each of these aspects is large, there are also challenges in performing a consistent large-scale evaluation of the resulting embeddings BIBREF0 . This makes systematic experimentation of alternative word-embedding configurations extremely difficult.",
"In this work, we make progress in trying to find good combinations of some of the previous parameters. We focus specifically in the task of computing word embeddings for processing the Portuguese Twitter stream. User-generated content (such as twitter messages) tends to be populated by words that are specific to the medium, and that are constantly being added by users. These dynamics pose challenges to NLP systems, which have difficulties in dealing with out of vocabulary words. Therefore, learning a semantic representation for those words directly from the user-generated stream - and as the words arise - would allow us to keep up with the dynamics of the medium and reduce the cases for which we have no information about the words.",
"Starting from our own implementation of a neural word embedding model, which should be seen as a flexible baseline model for further experimentation, our research tries to answer the following practical questions:",
"By answering these questions based on a reasonably small sample of Twitter data (5M), we hope to find the best way to proceed and train embeddings for Twitter vocabulary using the much larger amount of Twitter data available (300M), but for which parameter experimentation would be unfeasible. This work can thus be seen as a preparatory study for a subsequent attempt to produce and distribute a large-scale database of embeddings for processing Portuguese Twitter data."
],
[
"There are several approaches to generating word embeddings. One can build models that explicitly aim at generating word embeddings, such as Word2Vec or GloVe BIBREF1 , BIBREF2 , or one can extract such embeddings as by-products of more general models, which implicitly compute such word embeddings in the process of solving other language tasks.",
"Word embeddings methods aim to represent words as real valued continuous vectors in a much lower dimensional space when compared to traditional bag-of-words models. Moreover, this low dimensional space is able to capture lexical and semantic properties of words. Co-occurrence statistics are the fundamental information that allows creating such representations. Two approaches exist for building word embeddings. One creates a low rank approximation of the word co-occurrence matrix, such as in the case of Latent Semantic Analysis BIBREF3 and GloVe BIBREF2 . The other approach consists in extracting internal representations from neural network models of text BIBREF4 , BIBREF5 , BIBREF1 . Levy and Goldberg BIBREF6 showed that the two approaches are closely related.",
"Although, word embeddings research go back several decades, it was the recent developments of Deep Learning and the word2vec framework BIBREF1 that captured the attention of the NLP community. Moreover, Mikolov et al. BIBREF7 showed that embeddings trained using word2vec models (CBOW and Skip-gram) exhibit linear structure, allowing analogy questions of the form “man:woman::king:??.” and can boost performance of several text classification tasks.",
"One of the issues of recent work in training word embeddings is the variability of experimental setups reported. For instance, in the paper describing GloVe BIBREF2 authors trained their model on five corpora of different sizes and built a vocabulary of 400K most frequent words. Mikolov et al. BIBREF7 trained with 82K vocabulary while Mikolov et al. BIBREF1 was trained with 3M vocabulary. Recently, Arora et al. BIBREF8 proposed a generative model for learning embeddings that tries to explain some theoretical justification for nonlinear models (e.g. word2vec and GloVe) and some hyper parameter choices. Authors evaluated their model using 68K vocabulary.",
"SemEval 2016-Task 4: Sentiment Analysis in Twitter organizers report that participants either used general purpose pre-trained word embeddings, or trained from Tweet 2016 dataset or “from some sort of dataset” BIBREF9 . However, participants neither report the size of vocabulary used neither the possible effect it might have on the task specific results.",
"Recently, Rodrigues et al. BIBREF10 created and distributed the first general purpose embeddings for Portuguese. Word2vec gensim implementation was used and authors report results with different values for the parameters of the framework. Furthermore, authors used experts to translate well established word embeddings test sets for Portuguese language, which they also made publicly available and we use some of those in this work."
],
[
"The neural word embedding model we use in our experiments is heavily inspired in the one described in BIBREF4 , but ours is one layer deeper and is set to solve a slightly different word prediction task. Given a sequence of 5 words - INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 , the task the model tries to perform is that of predicting the middle word, INLINEFORM5 , based on the two words on the left - INLINEFORM6 INLINEFORM7 - and the two words on the right - INLINEFORM8 INLINEFORM9 : INLINEFORM10 . This should produce embeddings that closely capture distributional similarity, so that words that belong to the same semantic class, or which are synonyms and antonyms of each other, will be embedded in “close” regions of the embedding hyper-space.",
"Our neural model is composed of the following layers:",
"All neural activations in the model are sigmoid functions. The model was implemented using the Syntagma library which relies on Keras BIBREF11 for model development, and we train the model using the built-in ADAM BIBREF12 optimizer with the default parameters."
],
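The list of layers is not reproduced above, so the following is only a rough sketch of the kind of middle-word prediction model described in this section: a Keras model that predicts the middle word of a 5-gram from the two words on each side, with sigmoid activations and the default ADAM optimizer. The hidden-layer structure and all size choices other than the 64-d embeddings are assumptions, not the authors' Syntagma implementation.

```python
# Minimal sketch (not the authors' Syntagma code): a Keras model that predicts the
# middle word of a 5-gram from the two words on each side. The 64-d embedding size
# matches the paper; the single hidden layer and its width are assumptions, since the
# exact layer list is not reproduced here.
from tensorflow import keras
from tensorflow.keras import layers

V = 2048      # vocabulary size |V| (2048 ... 32768 in the paper)
EMB_DIM = 64  # input/output embedding dimensionality

context = keras.Input(shape=(4,), dtype="int32")        # w1, w2, w4, w5
emb = layers.Embedding(V, EMB_DIM)(context)             # (batch, 4, 64)
flat = layers.Flatten()(emb)                            # concatenate context embeddings
hidden = layers.Dense(EMB_DIM, activation="sigmoid")(flat)
output = layers.Dense(V, activation="sigmoid")(hidden)  # score for each candidate middle word

model = keras.Model(context, output)
model.compile(optimizer=keras.optimizers.Adam(),        # default ADAM parameters
              loss="categorical_crossentropy")          # cross-entropy, as tracked in the paper
```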
[
"We are interested in assessing two aspects of the word embedding process. On one hand, we wish to evaluate the semantic quality of the produced embeddings. On the other, we want to quantify how much computational power and training data are required to train the embedding model as a function of the size of the vocabulary INLINEFORM0 we try to embed. These aspects have fundamental practical importance for deciding how we should attempt to produce the large-scale database of embeddings we will provide in the future. All resources developed in this work are publicly available.",
"Apart from the size of the vocabulary to be processed ( INLINEFORM0 ), the hyperparamaters of the model that we could potentially explore are i) the dimensionality of the input word embeddings and ii) the dimensionality of the output word embeddings. As mentioned before, we set both to 64 bits after performing some quick manual experimentation. Full hyperparameter exploration is left for future work.",
"Our experimental testbed comprises a desktop with a nvidia TITAN X (Pascal), Intel Core Quad i7 3770K 3.5Ghz, 32 GB DDR3 RAM and a 180GB SSD drive."
],
[
"We randomly sampled 5M tweets from a corpus of 300M tweets collected from the Portuguese Twitter community BIBREF13 . The 5M comprise a total of 61.4M words (approx. 12 words per tweets in average). From those 5M tweets we generated a database containing 18.9M distinct 5-grams, along with their frequency counts. In this process, all text was down-cased. To help anonymizing the n-gram information, we substituted all the twitter handles by an artificial token “T_HANDLE\". We also substituted all HTTP links by the token “LINK\". We prepended two special tokens to complete the 5-grams generated from the first two words of the tweet, and we correspondingly appended two other special tokens to complete 5-grams centered around the two last tokens of the tweet.",
"Tokenization was perform by trivially separating tokens by blank space. No linguistic pre-processing, such as for example separating punctuation from words, was made. We opted for not doing any pre-processing for not introducing any linguistic bias from another tool (tokenization of user generated content is not a trivial problem). The most direct consequence of not performing any linguistic pre-processing is that of increasing the vocabulary size and diluting token counts. However, in principle, and given enough data, the embedding model should be able to learn the correct embeddings for both actual words (e.g. “ronaldo\") and the words that have punctuation attached (e.g. “ronaldo!\"). In practice, we believe that this can actually be an advantage for the downstream consumers of the embeddings, since they can also relax the requirements of their own tokenization stage. Overall, the dictionary thus produced contains approximately 1.3M distinct entries. Our dictionary was sorted by frequency, so the words with lowest index correspond to the most common words in the corpus.",
"We used the information from the 5-gram database to generate all training data used in the experiments. For a fixed size INLINEFORM0 of the target vocabulary to be embedded (e.g. INLINEFORM1 = 2048), we scanned the database to obtain all possible 5-grams for which all tokens were among the top INLINEFORM2 words of the dictionary (i.e. the top INLINEFORM3 most frequent words in the corpus). Depending on INLINEFORM4 , different numbers of valid training 5-grams were found in the database: the larger INLINEFORM5 the more valid 5-grams would pass the filter. The number of examples collected for each of the values of INLINEFORM6 is shown in Table TABREF16 .",
"Since one of the goals of our experiments is to understand the impact of using different amounts of training data, for each size of vocabulary to be embedded INLINEFORM0 we will run experiments training the models using 25%, 50%, 75% and 100% of the data available."
],
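As a concrete illustration of the training-data construction described above, the sketch below filters the 5-gram database down to the 5-grams whose tokens are all among the top |V| words. Function and variable names are illustrative; this is not the authors' code.

```python
# Sketch of the training-data filter: given a frequency-sorted dictionary and a 5-gram
# count database, keep only the 5-grams whose five tokens are all among the top |V| words.
from typing import Dict, List, Tuple

def build_training_set(fivegrams: Dict[Tuple[str, ...], int],
                       word_index: Dict[str, int],
                       vocab_size: int) -> List[Tuple[List[int], int, int]]:
    """Return (context_ids, target_id, count) triples for the valid 5-grams.

    word_index maps a word to its frequency rank (0 = most frequent), so a token is
    in the target vocabulary iff word_index[token] < vocab_size.
    """
    examples = []
    for gram, count in fivegrams.items():
        ids = [word_index.get(tok, vocab_size) for tok in gram]
        if all(i < vocab_size for i in ids):
            context = [ids[0], ids[1], ids[3], ids[4]]   # two left + two right words
            target = ids[2]                              # middle word to predict
            examples.append((context, target, count))
    return examples
```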
[
"We tracked metrics related to the learning process itself, as a function of the vocabulary size to be embedded INLINEFORM0 and of the fraction of training data used (25%, 50%, 75% and 100%). For all possible configurations, we recorded the values of the training and validation loss (cross entropy) after each epoch. Tracking these metrics serves as a minimalistic sanity check: if the model is not able to solve the word prediction task with some degree of success (e.g. if we observe no substantial decay in the losses) then one should not expect the embeddings to capture any of the distributional information they are supposed to capture."
],
[
"Using the gold standard data (described below), we performed three types of tests:",
"Class Membership Tests: embeddings corresponding two member of the same semantic class (e.g. “Months of the Year\", “Portuguese Cities\", “Smileys\") should be close, since they are supposed to be found in mostly the same contexts.",
"Class Distinction Test: this is the reciprocal of the previous Class Membership test. Embeddings of elements of different classes should be different, since words of different classes ere expected to be found in significantly different contexts.",
"Word Equivalence Test: embeddings corresponding to synonyms, antonyms, abbreviations (e.g. “porque\" abbreviated by “pq\") and partial references (e.g. “slb and benfica\") should be almost equal, since both alternatives are supposed to be used be interchangeable in all contexts (either maintaining or inverting the meaning).",
"Therefore, in our tests, two words are considered:",
"distinct if the cosine of the corresponding embeddings is lower than 0.70 (or 0.80).",
"to belong to the same class if the cosine of their embeddings is higher than 0.70 (or 0.80).",
"equivalent if the cosine of the embeddings is higher that 0.85 (or 0.95).",
"We report results using different thresholds of cosine similarity as we noticed that cosine similarity is skewed to higher values in the embedding space, as observed in related work BIBREF14 , BIBREF15 . We used the following sources of data for testing Class Membership:",
"AP+Battig data. This data was collected from the evaluation data provided by BIBREF10 . These correspond to 29 semantic classes.",
"Twitter-Class - collected manually by the authors by checking top most frequent words in the dictionary and then expanding the classes. These include the following 6 sets (number of elements in brackets): smileys (13), months (12), countries (6), names (19), surnames (14) Portuguese cities (9).",
"For the Class Distinction test, we pair each element of each of the gold standard classes, with all the other elements from other classes (removing duplicate pairs since ordering does not matter), and we generate pairs of words which are supposed belong to different classes. For Word Equivalence test, we manually collected equivalente pairs, focusing on abbreviations that are popular in Twitters (e.g. “qt\" INLINEFORM0 “quanto\" or “lx\" INLINEFORM1 “lisboa\" and on frequent acronyms (e.g. “slb\" INLINEFORM2 “benfica\"). In total, we compiled 48 equivalence pairs.",
"For all these tests we computed a coverage metric. Our embeddings do not necessarily contain information for all the words contained in each of these tests. So, for all tests, we compute a coverage metric that measures the fraction of the gold-standard pairs that could actually be tested using the different embeddings produced. Then, for all the test pairs actually covered, we obtain the success metrics for each of the 3 tests by computing the ratio of pairs we were able to correctly classified as i) being distinct (cosine INLINEFORM0 0.7 or 0.8), ii) belonging to the same class (cosine INLINEFORM1 0.7 or 0.8), and iii) being equivalent (cosine INLINEFORM2 0.85 or 0.95).",
"It is worth making a final comment about the gold standard data. Although we do not expect this gold standard data to be sufficient for a wide-spectrum evaluation of the resulting embeddings, it should be enough for providing us clues regarding areas where the embedding process is capturing enough semantics, and where it is not. These should still provide valuable indications for planning how to produce the much larger database of word embeddings."
],
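The threshold-based tests and the coverage metric described above can be summarized with the following sketch. It assumes embeddings are given as a word-to-vector dictionary and gold-standard data as word pairs; names are illustrative, not the authors' evaluation code.

```python
# Illustrative sketch of the threshold-based tests: a pair passes the membership or
# equivalence test when the cosine of its embeddings is above the threshold, and the
# distinction test when it is below. Coverage is the fraction of gold pairs whose words
# are both present in the embedded vocabulary.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def run_test(pairs, embeddings, threshold, expect_similar=True):
    """pairs: list of (word_a, word_b); embeddings: dict word -> vector."""
    covered, correct = 0, 0
    for a, b in pairs:
        if a not in embeddings or b not in embeddings:
            continue                      # pair not covered by this vocabulary
        covered += 1
        sim = cosine(embeddings[a], embeddings[b])
        passed = sim > threshold if expect_similar else sim < threshold
        correct += int(passed)
    coverage = covered / len(pairs) if pairs else 0.0
    accuracy = correct / covered if covered else 0.0
    return coverage, accuracy

# Example usage with the thresholds reported above (pair lists are hypothetical names):
# run_test(membership_pairs, emb, 0.70)                         # Class Membership
# run_test(distinction_pairs, emb, 0.70, expect_similar=False)  # Class Distinction
# run_test(equivalence_pairs, emb, 0.85)                        # Word Equivalence
```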
[
"We run the training process and performed the corresponding evaluation for 12 combinations of size of vocabulary to be embedded, and the volume of training data available that has been used. Table TABREF27 presents some overall statistics after training for 40 epochs.",
"The average time per epoch increases first with the size of the vocabulary to embed INLINEFORM0 (because the model will have more parameters), and then, for each INLINEFORM1 , with the volume of training data. Using our testbed (Section SECREF4 ), the total time of learning in our experiments varied from a minimum of 160 seconds, with INLINEFORM2 = 2048 and 25% of data, to a maximum of 22.5 hours, with INLINEFORM3 = 32768 and using 100% of the training data available (extracted from 5M tweets). These numbers give us an approximate figure of how time consuming it would be to train embeddings from the complete Twitter corpus we have, consisting of 300M tweets.",
"We now analyze the learning process itself. We plot the training set loss and validation set loss for the different values of INLINEFORM0 (Figure FIGREF28 left) with 40 epochs and using all the available data. As expected, the loss is reducing after each epoch, with validation loss, although being slightly higher, following the same trend. When using 100% we see no model overfitting. We can also observe that the higher is INLINEFORM1 the higher are the absolute values of the loss sets. This is not surprising because as the number of words to predict becomes higher the problem will tend to become harder. Also, because we keep the dimensionality of the embedding space constant (64 dimensions), it becomes increasingly hard to represent and differentiate larger vocabularies in the same hyper-volume. We believe this is a specially valuable indication for future experiments and for deciding the dimensionality of the final embeddings to distribute.",
"On the right side of Figure FIGREF28 we show how the number of training (and validation) examples affects the loss. For a fixed INLINEFORM0 = 32768 we varied the amount of data used for training from 25% to 100%. Three trends are apparent. As we train with more data, we obtain better validation losses. This was expected. The second trend is that by using less than 50% of the data available the model tends to overfit the data, as indicated by the consistent increase in the validation loss after about 15 epochs (check dashed lines in right side of Figure FIGREF28 ). This suggests that for the future we should not try any drastic reduction of the training data to save training time. Finally, when not overfitting, the validation loss seems to stabilize after around 20 epochs. We observed no phase-transition effects (the model seems simple enough for not showing that type of behavior). This indicates we have a practical way of safely deciding when to stop training the model."
],
[
"Table TABREF30 presents results for the three different tests described in Section SECREF4 . The first (expected) result is that the coverage metrics increase with the size of the vocabulary being embedded, i.e., INLINEFORM0 . Because the Word Equivalence test set was specifically created for evaluating Twitter-based embedding, when embedding INLINEFORM1 = 32768 words we achieve almost 90% test coverage. On the other hand, for the Class Distinction test set - which was created by doing the cross product of the test cases of each class in Class Membership test set - we obtain very low coverage figures. This indicates that it is not always possible to re-use previously compiled gold-standard data, and that it will be important to compile gold-standard data directly from Twitter content if we want to perform a more precise evaluation.",
"The effect of varying the cosine similarity decision threshold from 0.70 to 0.80 for Class Membership test shows that the percentage of classified as correct test cases drops significantly. However, the drop is more accentuated when training with only a portion of the available data. The differences of using two alternative thresholds values is even higher in the Word Equivalence test.",
"The Word Equivalence test, in which we consider two words equivalent word if the cosine of the embedding vectors is higher than 0.95, revealed to be an extremely demanding test. Nevertheless, for INLINEFORM0 = 32768 the results are far superior, and for a much larger coverage, than for lower INLINEFORM1 . The same happens with the Class Membership test.",
"On the other hand, the Class Distinction test shows a different trend for larger values of INLINEFORM0 = 32768 but the coverage for other values of INLINEFORM1 is so low that becomes difficult to hypothesize about the reduced values of True Negatives (TN) percentage obtained for the largest INLINEFORM2 . It would be necessary to confirm this behavior with even larger values of INLINEFORM3 . One might hypothesize that the ability to distinguish between classes requires larger thresholds when INLINEFORM4 is large. Also, we can speculate about the need of increasing the number of dimensions to be able to encapsulate different semantic information for so many words."
],
[
"Despite already providing interesting practical clues for our goal of trying to embed a larger vocabulary using more of the training data we have available, these results also revealed that the intrinsic evaluation metrics we are using are overly sensitive to their corresponding cosine similarity thresholds. This sensitivity poses serious challenges for further systematic exploration of word embedding architectures and their corresponding hyper-parameters, which was also observed in other recent works BIBREF15 .",
"By using these absolute thresholds as criteria for deciding similarity of words, we create a dependency between the evaluation metrics and the geometry of the embedded data. If we see the embedding data as a graph, this means that metrics will change if we apply scaling operations to certain parts of the graph, even if its structure (i.e. relative position of the embedded words) does not change.",
"For most practical purposes (including training downstream ML models) absolute distances have little meaning. What is fundamental is that the resulting embeddings are able to capture topological information: similar words should be closer to each other than they are to words that are dissimilar to them (under the various criteria of similarity we care about), independently of the absolute distances involved.",
"It is now clear that a key aspect for future work will be developing additional performance metrics based on topological properties. We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores. For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine. Future work will necessarily include developing this type of metrics."
],
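A hedged sketch of the threshold-free, topology-based metric suggested above: for each word, every word known to belong to the same class must be closer (higher cosine) than any word from a different class. This only illustrates the proposed idea; it is not an evaluation script from the paper.

```python
# Threshold-free check: a word passes if its farthest same-class neighbour is still
# closer than its nearest different-class neighbour, independently of absolute cosines.
import numpy as np

def topological_class_score(classes, embeddings):
    """classes: dict class_name -> list of words; embeddings: dict word -> vector."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    passed, tested = 0, 0
    for name, members in classes.items():
        others = [w for c, ws in classes.items() if c != name
                  for w in ws if w in embeddings]
        for w in members:
            if w not in embeddings:
                continue
            same = [cos(embeddings[w], embeddings[m]) for m in members
                    if m != w and m in embeddings]
            diff = [cos(embeddings[w], embeddings[o]) for o in others]
            if not same or not diff:
                continue
            tested += 1
            passed += int(min(same) > max(diff))
    return passed / tested if tested else 0.0
```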
[
"Producing word embeddings from tweets is challenging due to the specificities of the vocabulary in the medium. We implemented a neural word embedding model that embeds words based on n-gram information extracted from a sample of the Portuguese Twitter stream, and which can be seen as a flexible baseline for further experiments in the field. Work reported in this paper is a preliminary study of trying to find parameters for training word embeddings from Twitter and adequate evaluation tests and gold-standard data.",
"Results show that using less than 50% of the available training examples for each vocabulary size might result in overfitting. The resulting embeddings obtain an interesting performance on intrinsic evaluation tests when trained a vocabulary containing the 32768 most frequent words in a Twitter sample of relatively small size. Nevertheless, results exhibit a skewness in the cosine similarity scores that should be further explored in future work. More specifically, the Class Distinction test set revealed to be challenging and opens the door to evaluation of not only similarity between words but also dissimilarities between words of different semantic classes without using absolute score values.",
"Therefore, a key area of future exploration has to do with better evaluation resources and metrics. We made some initial effort in this front. However, we believe that developing new intrinsic tests, agnostic to absolute values of metrics and concerned with topological aspects of the embedding space, and expanding gold-standard data with cases tailored for user-generated content, is of fundamental importance for the progress of this line of work.",
"Furthermore, we plan to make public available word embeddings trained from a large sample of 300M tweets collected from the Portuguese Twitter stream. This will require experimenting producing embeddings with higher dimensionality (to avoid the cosine skewness effect) and training with even larger vocabularies. Also, there is room for experimenting with some of the hyper-parameters of the model itself (e.g. activation functions, dimensions of the layers), which we know have impact on final results."
]
]
} | {
"question": [
"What new metrics are suggested to track progress?",
"What intrinsic evaluation metrics are used?",
"What experimental results suggest that using less than 50% of the available training examples might result in overfitting?"
],
"question_id": [
"982979cb3c71770d8d7d2d1be8f92b66223dec85",
"5ba6f7f235d0f5d1d01fd97dd5e4d5b0544fd212",
"7ce7edd06925a943e32b59f3e7b5159ccb7acaf6"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"twitter",
"twitter",
"twitter"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"It is now clear that a key aspect for future work will be developing additional performance metrics based on topological properties. We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores. For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine. Future work will necessarily include developing this type of metrics."
],
"highlighted_evidence": [
"We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores. For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine."
]
}
],
"annotation_id": [
"45d149671cf9fe75e00db47b656f3903653915d7"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Class Membership Tests",
"Class Distinction Test",
"Word Equivalence Test"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Tests and Gold-Standard Data for Intrinsic Evaluation",
"Using the gold standard data (described below), we performed three types of tests:",
"Class Membership Tests: embeddings corresponding two member of the same semantic class (e.g. “Months of the Year\", “Portuguese Cities\", “Smileys\") should be close, since they are supposed to be found in mostly the same contexts.",
"Class Distinction Test: this is the reciprocal of the previous Class Membership test. Embeddings of elements of different classes should be different, since words of different classes ere expected to be found in significantly different contexts.",
"Word Equivalence Test: embeddings corresponding to synonyms, antonyms, abbreviations (e.g. “porque\" abbreviated by “pq\") and partial references (e.g. “slb and benfica\") should be almost equal, since both alternatives are supposed to be used be interchangeable in all contexts (either maintaining or inverting the meaning).",
"Therefore, in our tests, two words are considered:",
"distinct if the cosine of the corresponding embeddings is lower than 0.70 (or 0.80).",
"to belong to the same class if the cosine of their embeddings is higher than 0.70 (or 0.80).",
"equivalent if the cosine of the embeddings is higher that 0.85 (or 0.95)."
],
"highlighted_evidence": [
"Tests and Gold-Standard Data for Intrinsic Evaluation\nUsing the gold standard data (described below), we performed three types of tests:\n\nClass Membership Tests: embeddings corresponding two member of the same semantic class (e.g. “Months of the Year\", “Portuguese Cities\", “Smileys\") should be close, since they are supposed to be found in mostly the same contexts.\n\nClass Distinction Test: this is the reciprocal of the previous Class Membership test. Embeddings of elements of different classes should be different, since words of different classes ere expected to be found in significantly different contexts.\n\nWord Equivalence Test: embeddings corresponding to synonyms, antonyms, abbreviations (e.g. “porque\" abbreviated by “pq\") and partial references (e.g. “slb and benfica\") should be almost equal, since both alternatives are supposed to be used be interchangeable in all contexts (either maintaining or inverting the meaning).\n\nTherefore, in our tests, two words are considered:\n\ndistinct if the cosine of the corresponding embeddings is lower than 0.70 (or 0.80).\n\nto belong to the same class if the cosine of their embeddings is higher than 0.70 (or 0.80).\n\nequivalent if the cosine of the embeddings is higher that 0.85 (or 0.95)."
]
},
{
"unanswerable": false,
"extractive_spans": [
"coverage metric",
"being distinct (cosine INLINEFORM0 0.7 or 0.8)",
"belonging to the same class (cosine INLINEFORM1 0.7 or 0.8)",
"being equivalent (cosine INLINEFORM2 0.85 or 0.95)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"For all these tests we computed a coverage metric. Our embeddings do not necessarily contain information for all the words contained in each of these tests. So, for all tests, we compute a coverage metric that measures the fraction of the gold-standard pairs that could actually be tested using the different embeddings produced. Then, for all the test pairs actually covered, we obtain the success metrics for each of the 3 tests by computing the ratio of pairs we were able to correctly classified as i) being distinct (cosine INLINEFORM0 0.7 or 0.8), ii) belonging to the same class (cosine INLINEFORM1 0.7 or 0.8), and iii) being equivalent (cosine INLINEFORM2 0.85 or 0.95)."
],
"highlighted_evidence": [
"For all these tests we computed a coverage metric. Our embeddings do not necessarily contain information for all the words contained in each of these tests. So, for all tests, we compute a coverage metric that measures the fraction of the gold-standard pairs that could actually be tested using the different embeddings produced.",
"Then, for all the test pairs actually covered, we obtain the success metrics for each of the 3 tests by computing the ratio of pairs we were able to correctly classified as i) being distinct (cosine INLINEFORM0 0.7 or 0.8), ii) belonging to the same class (cosine INLINEFORM1 0.7 or 0.8), and iii) being equivalent (cosine INLINEFORM2 0.85 or 0.95)."
]
}
],
"annotation_id": [
"5b95a3c6959abfd9d7011bf633e6275a25ac80e4",
"dac6a1aecadafa7ee208f8900a38ec11ad12fa2f"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"consistent increase in the validation loss after about 15 epochs"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"On the right side of Figure FIGREF28 we show how the number of training (and validation) examples affects the loss. For a fixed INLINEFORM0 = 32768 we varied the amount of data used for training from 25% to 100%. Three trends are apparent. As we train with more data, we obtain better validation losses. This was expected. The second trend is that by using less than 50% of the data available the model tends to overfit the data, as indicated by the consistent increase in the validation loss after about 15 epochs (check dashed lines in right side of Figure FIGREF28 ). This suggests that for the future we should not try any drastic reduction of the training data to save training time. Finally, when not overfitting, the validation loss seems to stabilize after around 20 epochs. We observed no phase-transition effects (the model seems simple enough for not showing that type of behavior). This indicates we have a practical way of safely deciding when to stop training the model."
],
"highlighted_evidence": [
"The second trend is that by using less than 50% of the data available the model tends to overfit the data, as indicated by the consistent increase in the validation loss after about 15 epochs (check dashed lines in right side of Figure FIGREF28 )."
]
}
],
"annotation_id": [
"c420b34ca1e7a288443bbfb7b81a7fcbd3b002b2"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1. Number of 5-grams available for training for different sizes of target vocabulary |V |",
"Table 2. Overall statistics for 12 combinations of models learned varying |V | and volume of training data. Results observed after 40 training epochs.",
"Fig. 1. Continuous line represents loss in the training data while dashed line represents loss in the validation data. Left side: effect of increasing |V | using 100% of training data. Right side: effect of varying the amount of training data used with |V | = 32768.",
"Table 3. Evaluation of resulting embeddings using Class Membership, Class Distinction and Word Equivalence tests for different thresholds of cosine similarity."
],
"file": [
"6-Table1-1.png",
"8-Table2-1.png",
"8-Figure1-1.png",
"10-Table3-1.png"
]
} |
1909.08859 | Procedural Reasoning Networks for Understanding Multimodal Procedures | This paper addresses the problem of comprehending procedural commonsense knowledge. This is a challenging task as it requires identifying key entities, keeping track of their state changes, and understanding temporal and causal relations. Contrary to most of the previous work, in this study, we do not rely on strong inductive bias and explore the question of how multimodality can be exploited to provide a complementary semantic signal. Towards this end, we introduce a new entity-aware neural comprehension model augmented with external relational memory units. Our model learns to dynamically update entity states in relation to each other while reading the text instructions. Our experimental analysis on the visual reasoning tasks in the recently proposed RecipeQA dataset reveals that our approach improves the accuracy of the previously reported models by a large margin. Moreover, we find that our model learns effective dynamic representations of entities even though we do not use any supervision at the level of entity states. | {
"section_name": [
"Introduction",
"Visual Reasoning in RecipeQA",
"Procedural Reasoning Networks",
"Procedural Reasoning Networks ::: Input Module",
"Procedural Reasoning Networks ::: Reasoning Module",
"Procedural Reasoning Networks ::: Attention Module",
"Procedural Reasoning Networks ::: Modeling Module",
"Procedural Reasoning Networks ::: Output Module",
"Experiments",
"Experiments ::: Entity Extraction",
"Experiments ::: Training Details",
"Experiments ::: Baselines",
"Experiments ::: Results",
"Related Work",
"Conclusion",
"Acknowledgements"
],
"paragraphs": [
[
"A great deal of commonsense knowledge about the world we live is procedural in nature and involves steps that show ways to achieve specific goals. Understanding and reasoning about procedural texts (e.g. cooking recipes, how-to guides, scientific processes) are very hard for machines as it demands modeling the intrinsic dynamics of the procedures BIBREF0, BIBREF1, BIBREF2. That is, one must be aware of the entities present in the text, infer relations among them and even anticipate changes in the states of the entities after each action. For example, consider the cheeseburger recipe presented in Fig. FIGREF2. The instruction “salt and pepper each patty and cook for 2 to 3 minutes on the first side” in Step 5 entails mixing three basic ingredients, the ground beef, salt and pepper, together and then applying heat to the mix, which in turn causes chemical changes that alter both the appearance and the taste. From a natural language understanding perspective, the main difficulty arises when a model sees the word patty again at a later stage of the recipe. It still corresponds to the same entity, but its form is totally different.",
"Over the past few years, many new datasets and approaches have been proposed that address this inherently hard problem BIBREF0, BIBREF1, BIBREF3, BIBREF4. To mitigate the aforementioned challenges, the existing works rely mostly on heavy supervision and focus on predicting the individual state changes of entities at each step. Although these models can accurately learn to make local predictions, they may lack global consistency BIBREF3, BIBREF4, not to mention that building such annotated corpora is very labor-intensive. In this work, we take a different direction and explore the problem from a multimodal standpoint. Our basic motivation, as illustrated in Fig. FIGREF2, is that accompanying images provide complementary cues about causal effects and state changes. For instance, it is quite easy to distinguish raw meat from cooked one in visual domain.",
"In particular, we take advantage of recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. To this end, inspired from BIBREF5, we propose Procedural Reasoning Networks (PRN) that incorporates entities into the comprehension process and allows to keep track of entities, understand their interactions and accordingly update their states across time. We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding causal and temporal relations from images and text. We further show that the dynamic entity representations can capture semantics of the state information in the corresponding steps."
],
[
"In our study, we particularly focus on the visual reasoning tasks of RecipeQA, namely visual cloze, visual coherence, and visual ordering tasks, each of which examines a different reasoning skill. We briefly describe these tasks below.",
"Visual Cloze. In the visual cloze task, the question is formed by a sequence of four images from consecutive steps of a recipe where one of them is replaced by a placeholder. A model should select the correct one from a multiple-choice list of four answer candidates to fill in the missing piece. In that regard, the task inherently requires aligning visual and textual information and understanding temporal relationships between the cooking actions and the entities.",
"Visual Coherence. The visual coherence task tests the ability to identify the image within a sequence of four images that is inconsistent with the text instructions of a cooking recipe. To succeed in this task, a model should have a clear understanding of the procedure described in the recipe and at the same time connect language and vision.",
"Visual Ordering. The visual ordering task is about grasping the temporal flow of visual events with the help of the given recipe text. The questions show a set of four images from the recipe and the task is to sort jumbled images into the correct order. Here, a model needs to infer the temporal relations between the images and align them with the recipe steps."
],
[
"In the following, we explain our Procedural Reasoning Networks model. Its architecture is based on a bi-directional attention flow (BiDAF) model BIBREF6, but also equipped with an explicit reasoning module that acts on entity-specific relational memory units. Fig. FIGREF4 shows an overview of the network architecture. It consists of five main modules: An input module, an attention module, a reasoning module, a modeling module, and an output module. Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images.",
"Input Module extracts vector representations of inputs at different levels of granularity by using several different encoders.",
"Reasoning Module scans the procedural text and tracks the states of the entities and their relations through a recurrent relational memory core unit BIBREF5.",
"Attention Module computes context-aware query vectors and query-aware context vectors as well as query-aware memory vectors.",
"Modeling Module employs two multi-layered RNNs to encode previous layers outputs.",
"Output Module scores a candidate answer from the given multiple-choice list.",
"At a high level, as the model is reading the cooking recipe, it continually updates the internal memory representations of the entities (ingredients) based on the content of each step – it keeps track of changes in the states of the entities, providing an entity-centric summary of the recipe. The response to a question and a possible answer depends on the representation of the recipe text as well as the last states of the entities. All this happens in a series of implicit relational reasoning steps and there is no need for explicitly encoding the state in terms of a predefined vocabulary."
],
[
"Let the triple $(\\mathbf {R},\\mathbf {Q},\\mathbf {A})$ be a sample input. Here, $\\mathbf {R}$ denotes the input recipe which contains textual instructions composed of $N$ words in total. $\\mathbf {Q}$ represents the question that consists of a sequence of $M$ images. $\\mathbf {A}$ denotes an answer that is either a single image or a series of $L$ images depending on the reasoning task. In particular, for the visual cloze and the visual coherence type questions, the answer contains a single image ($L=1$) and for the visual ordering task, it includes a sequence.",
"We encode the input recipe $\\mathbf {R}$ at character, word, and step levels. Character-level embedding layer uses a convolutional neural network, namely CharCNN model by BIBREF7, which outputs character level embeddings for each word and alleviates the issue of out-of-vocabulary (OOV) words. In word embedding layer, we use a pretrained GloVe model BIBREF8 and extract word-level embeddings. The concatenation of the character and the word embeddings are then fed to a two-layer highway network BIBREF10 to obtain a contextual embedding for each word in the recipe. This results in the matrix $\\mathbf {R}^{\\prime } \\in \\mathbb {R}^{2d \\times N}$.",
"On top of these layers, we have another layer that encodes the steps of the recipe in an individual manner. Specifically, we obtain a step-level contextual embedding of the input recipe containing $T$ steps as $\\mathcal {S}=(\\mathbf {s}_1,\\mathbf {s}_2,\\dots ,\\mathbf {s}_T)$ where $\\mathbf {s}_i$ represents the final state of a BiLSTM encoding the $i$-th step of the recipe obtained from the character and word-level embeddings of the tokens exist in the corresponding step.",
"We represent both the question $\\mathbf {Q}$ and the answer $\\mathbf {A}$ in terms of visual embeddings. Here, we employ a pretrained ResNet-50 model BIBREF11 trained on ImageNet dataset BIBREF12 and represent each image as a real-valued 2048-d vector using features from the penultimate average-pool layer. Then these embeddings are passed first to a multilayer perceptron (MLP) and then its outputs are fed to a BiLSTM. We then form a matrix $\\mathbf {Q}^{\\prime } \\in \\mathbb {R}^{2d \\times M}$ for the question by concatenating the cell states of the BiLSTM. For the visual ordering task, to represent the sequence of images in the answer with a single vector, we additionally use a BiLSTM and define the answering embedding by the summation of the cell states of the BiLSTM. Finally, for all tasks, these computations produce answer embeddings denoted by $\\mathbf {a} \\in \\mathbb {R}^{2d \\times 1}$."
],
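A hedged PyTorch sketch of the visual input encoder described above: ResNet-50 penultimate average-pool features, followed by an MLP and a BiLSTM. The paper forms Q' from the BiLSTM cell states; the sketch below returns the per-image hidden outputs as a simplification, and hidden sizes are assumptions.

```python
# Illustrative image-sequence encoder, not the authors' implementation.
import torch
import torch.nn as nn
import torchvision.models as models

class ImageSequenceEncoder(nn.Module):
    def __init__(self, d: int = 100):
        super().__init__()
        resnet = models.resnet50(weights="IMAGENET1K_V1")
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the final fc layer
        self.mlp = nn.Sequential(nn.Linear(2048, 2 * d), nn.ReLU())
        self.bilstm = nn.LSTM(2 * d, d, batch_first=True, bidirectional=True)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, M, 3, 224, 224) -> per-image 2d-dim vectors (batch, M, 2d)
        b, m = images.shape[:2]
        feats = self.backbone(images.flatten(0, 1)).flatten(1)   # (b*m, 2048) pooled features
        feats = self.mlp(feats).view(b, m, -1)
        outputs, (_, _) = self.bilstm(feats)
        # Simplification: we return the per-step hidden states; the paper concatenates
        # the BiLSTM cell states, which nn.LSTM does not expose per step.
        return outputs
```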
[
"As mentioned before, comprehending a cooking recipe is mostly about entities (basic ingredients) and actions (cooking activities) described in the recipe instructions. Each action leads to changes in the states of the entities, which usually affects their visual characteristics. A change rarely occurs in isolation; in most cases, the action affects multiple entities at once. Hence, in our reasoning module, we have an explicit memory component implemented with relational memory units BIBREF5. This helps us to keep track of the entities, their state changes and their relations in relation to each other over the course of the recipe (see Fig. FIGREF14). As we will examine in more detail in Section SECREF4, it also greatly improves the interpretability of model outputs.",
"Specifically, we set up the memory with a memory matrix $\\mathbf {E} \\in \\mathbb {R}^{d_E \\times K}$ by extracting $K$ entities (ingredients) from the first step of the recipe. We initialize each memory cell $\\mathbf {e}_i$ representing a specific entity by its CharCNN and pre-trained GloVe embeddings. From now on, we will use the terms memory cells and entities interchangeably throughout the paper. Since the input recipe is given in the form of a procedural text decomposed into a number of steps, we update the memory cells after each step, reflecting the state changes happened on the entities. This update procedure is modelled via a relational recurrent neural network (R-RNN), recently proposed by BIBREF5. It is built on a 2-dimensional LSTM model whose matrix of cell states represent our memory matrix $\\mathbf {E}$. Here, each row $i$ of the matrix $\\mathbf {E}$ refers to a specific entity $\\mathbf {e}_i$ and is updated after each recipe step $t$ as follows:",
"where $\\mathbf {s}_{t}$ denotes the embedding of recipe step $t$ and $\\mathbf {\\phi }_{i,t}=(\\mathbf {h}_{i,t},\\mathbf {e}_{i,t})$ is the cell state of the R-RNN at step $t$ with $\\mathbf {h}_{i,t}$ and $\\mathbf {e}_{i,t}$ being the $i$-th row of the hidden state of the R-RNN and the dynamic representation of entity $\\mathbf {e}_{i}$ at the step $t$, respectively. The R-RNN model exploits a multi-headed self-attention mechanism BIBREF13 that allows memory cells to interact with each other and attend multiple locations simultaneously during the update phase.",
"In Fig. FIGREF14, we illustrate how this interaction takes place in our relational memory module by considering a sample cooking recipe and by presenting how the attention matrix changes throughout the recipe. In particular, the attention matrix at a specific time shows the attention flow from one entity (memory cell) to another along with the attention weights to the corresponding recipe step (offset column). The color intensity shows the magnitude of the attention weights. As can be seen from the figure, the internal representations of the entities are actively updated at each step. Moreover, as argued in BIBREF5, this can be interpreted as a form of relational reasoning as each update on a specific memory cell is operated in relation to others. Here, we should note that it is often difficult to make sense of these attention weights. However, we observe that the attention matrix changes very gradually near the completion of the recipe."
],
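The following is a simplified sketch of the entity-memory update described above: at each recipe step, the K entity memories attend over themselves and the step embedding with multi-head self-attention and are updated with a gated rule, in the spirit of the relational memory core of BIBREF5. The gating form, head count, and sizes are assumptions rather than the exact R-RNN.

```python
# Simplified relational memory update; illustrative, not the exact R-RNN of the paper.
import torch
import torch.nn as nn

class RelationalMemory(nn.Module):
    def __init__(self, d_e: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_e, num_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_e, d_e)
        self.cand = nn.Linear(2 * d_e, d_e)

    def forward(self, memory: torch.Tensor, step_emb: torch.Tensor) -> torch.Tensor:
        # memory: (batch, K, d_e) entity states; step_emb: (batch, d_e) current step embedding
        keys = torch.cat([memory, step_emb.unsqueeze(1)], dim=1)   # entities + current step
        attended, _ = self.attn(memory, keys, keys)                # entities attend to each
                                                                   # other and to the step
        g = torch.sigmoid(self.gate(torch.cat([memory, attended], dim=-1)))
        c = torch.tanh(self.cand(torch.cat([memory, attended], dim=-1)))
        return g * c + (1 - g) * memory                            # gated entity-state update
```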
[
"Attention module is in charge of linking the question with the recipe text and the entities present in the recipe. It takes the matrices $\\mathbf {Q^{\\prime }}$ and $\\mathbf {R}^{\\prime }$ from the input module, and $\\mathbf {E}$ from the reasoning module and constructs the question-aware recipe representation $\\mathbf {G}$ and the question-aware entity representation $\\mathbf {Y}$. Following the attention flow mechanism described in BIBREF14, we specifically calculate attentions in four different directions: (1) from question to recipe, (2) from recipe to question, (3) from question to entities, and (4) from entities to question.",
"The first two of these attentions require computing a shared affinity matrix $\\mathbf {S}^R \\in \\mathbb {R}^{N \\times M}$ with $\\mathbf {S}^R_{i,j}$ indicating the similarity between $i$-th recipe word and $j$-th image in the question estimated by",
"where $\\mathbf {w}^{\\top }_{R}$ is a trainable weight vector, $\\circ $ and $[;]$ denote elementwise multiplication and concatenation operations, respectively.",
"Recipe-to-question attention determines the images within the question that is most relevant to each word of the recipe. Let $\\mathbf {\\tilde{Q}} \\in \\mathbb {R}^{2d \\times N}$ represent the recipe-to-question attention matrix with its $i$-th column being given by $ \\mathbf {\\tilde{Q}}_i=\\sum _j \\mathbf {a}_{ij}\\mathbf {Q}^{\\prime }_j$ where the attention weight is computed by $\\mathbf {a}_i=\\operatorname{softmax}(\\mathbf {S}^R_{i}) \\in \\mathbb {R}^M$.",
"Question-to-recipe attention signifies the words within the recipe that have the closest similarity to each image in the question, and construct an attended recipe vector given by $ \\tilde{\\mathbf {r}}=\\sum _{i}\\mathbf {b}_i\\mathbf {R}^{\\prime }_i$ with the attention weight is calculated by $\\mathbf {b}=\\operatorname{softmax}(\\operatorname{max}_{\\mathit {col}}(\\mathbf {S}^R)) \\in \\mathbb {R}^{N}$ where $\\operatorname{max}_{\\mathit {col}}$ denotes the maximum function across the column. The question-to-recipe matrix is then obtained by replicating $\\tilde{\\mathbf {r}}$ $N$ times across the column, giving $\\tilde{\\mathbf {R}} \\in \\mathbb {R}^{2d \\times N}$.",
"Then, we construct the question aware representation of the input recipe, $\\mathbf {G}$, with its $i$-th column $\\mathbf {G}_i \\in \\mathbb {R}^{8d \\times N}$ denoting the final embedding of $i$-th word given by",
"Attentions from question to entities, and from entities to question are computed in a way similar to the ones described above. The only difference is that it uses a different shared affinity matrix to be computed between the memory encoding entities $\\mathbf {E}$ and the question $\\mathbf {Q}^{\\prime }$. These attentions are then used to construct the question aware representation of entities, denoted by $\\mathbf {Y}$, that links and integrates the images in the question and the entities in the input recipe."
],
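A hedged sketch of the bi-directional attention between recipe words and question images described above. The affinity score and the two attention directions follow the text; the final combination for G follows the standard BiDAF formulation, since the exact equation is not reproduced here. The batch dimension is omitted for clarity.

```python
# Illustrative BiDAF-style attention flow between recipe words and question images.
import torch
import torch.nn.functional as F

def bidaf_attention(R: torch.Tensor, Q: torch.Tensor, w_R: torch.Tensor) -> torch.Tensor:
    # R: (2d, N) recipe words, Q: (2d, M) question images, w_R: (6d,) trainable weights
    N, M = R.shape[1], Q.shape[1]
    Re = R.unsqueeze(2).expand(-1, N, M)                 # (2d, N, M)
    Qe = Q.unsqueeze(1).expand(-1, N, M)                 # (2d, N, M)
    feats = torch.cat([Re, Qe, Re * Qe], dim=0)          # [R; Q; R o Q]
    S = torch.einsum("k,knm->nm", w_R, feats)            # affinity matrix (N, M)

    # recipe-to-question: most relevant images for each recipe word
    Q_tilde = Q @ F.softmax(S, dim=1).T                  # (2d, N)

    # question-to-recipe: attended recipe vector, tiled across all N words
    b = F.softmax(S.max(dim=1).values, dim=0)            # (N,)
    R_tilde = (R @ b).unsqueeze(1).expand(-1, N)         # (2d, N)

    # question-aware recipe representation G (8d x N), standard BiDAF combination
    G = torch.cat([R, Q_tilde, R * Q_tilde, R * R_tilde], dim=0)
    return G
```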
[
"Modeling module takes the question-aware representations of the recipe $\\mathbf {G}$ and the entities $\\mathbf {Y}$, and forms their combined vector representation. For this purpose, we first use a two-layer BiLSTM to read the question-aware recipe $\\mathbf {G}$ and to encode the interactions among the words conditioned on the question. For each direction of BiLSTM , we use its hidden state after reading the last token as its output. In the end, we obtain a vector embedding $\\mathbf {c} \\in \\mathbb {R}^{2d \\times 1}$. Similarly, we employ a second BiLSTM, this time, over the entities $\\mathbf {Y}$, which results in another vector embedding $\\mathbf {f} \\in \\mathbb {R}^{2d_E \\times 1}$. Finally, these vector representations are concatenated and then projected to a fixed size representation using $\\mathbf {o}=\\varphi _o(\\left[\\mathbf {c}; \\mathbf {f}\\right]) \\in \\mathbb {R}^{2d \\times 1}$ where $\\varphi _o$ is a multilayer perceptron with $\\operatorname{tanh}$ activation function."
],
[
"The output module takes the output of the modeling module, encoding vector embeddings of the question-aware recipe and the entities $\\mathbf {Y}$, and the embedding of the answer $\\mathbf {A}$, and returns a similarity score which is used while determining the correct answer. Among all the candidate answer, the one having the highest similarity score is chosen as the correct answer. To train our proposed procedural reasoning network, we employ a hinge ranking loss BIBREF15, similar to the one used in BIBREF2, given below.",
"where $\\gamma $ is the margin parameter, $\\mathbf {a}_+$ and $\\mathbf {a}_{-}$ are the correct and the incorrect answers, respectively."
],
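Since the loss equation itself is not reproduced above, the sketch below assumes the standard hinge ranking form over similarity scores (the correct answer should score at least gamma higher than each incorrect candidate); the use of cosine similarity and the margin value are assumptions, not details confirmed by the text.

```python
# Assumed hinge ranking loss over candidate answers; illustrative only.
import torch
import torch.nn.functional as F

def hinge_ranking_loss(output_vec: torch.Tensor,
                       pos_answer: torch.Tensor,
                       neg_answers: torch.Tensor,
                       gamma: float = 0.1) -> torch.Tensor:
    # output_vec: (batch, 2d)     -- o from the modeling module
    # pos_answer: (batch, 2d)     -- embedding a+ of the correct answer
    # neg_answers: (batch, C, 2d) -- embeddings a- of the incorrect candidates
    s_pos = F.cosine_similarity(output_vec, pos_answer, dim=-1)                   # (batch,)
    s_neg = F.cosine_similarity(output_vec.unsqueeze(1), neg_answers, dim=-1)     # (batch, C)
    losses = torch.clamp(gamma - s_pos.unsqueeze(1) + s_neg, min=0.0)             # margin violations
    return losses.mean()
```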
[
"In this section, we describe our experimental setup and then analyze the results of the proposed Procedural Reasoning Networks (PRN) model."
],
[
"Given a recipe, we automatically extract the entities from the initial step of a recipe by using a dictionary of ingredients. While determining the ingredients, we exploit Recipe1M BIBREF16 and Kaggle What’s Cooking Recipes BIBREF17 datasets, and form our dictionary using the most commonly used ingredients in the training set of RecipeQA. For the cases when no entity can be extracted from the recipe automatically (20 recipes in total), we manually annotate those recipes with the related entities."
],
[
"In our experiments, we separately trained models on each task, as well as we investigated multi-task learning where a single model is trained to solve all these tasks at once. In total, the PRN architecture consists of $\\sim $12M trainable parameters. We implemented our models in PyTorch BIBREF18 using AllenNLP library BIBREF6. We used Adam optimizer with a learning rate of 1e-4 with an early stopping criteria with the patience set to 10 indicating that the training procedure ends after 10 iterations if the performance would not improve. We considered a batch size of 32 due to our hardware constraints. In the multi-task setting, batches are sampled round-robin from all tasks, where each batch is solely composed of examples from one task. We performed our experiments on a system containing four NVIDIA GTX-1080Ti GPUs, and training a single model took around 2 hours. We employed the same hyperparameters for all the baseline systems. We plan to share our code and model implementation after the review process."
],
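A small sketch of the round-robin multi-task batching described above: batches are drawn from task-specific loaders in turn, and each batch contains examples from a single task. Loader handling and the stopping policy are illustrative choices, not the authors' code.

```python
# Round-robin sampler over per-task data loaders; illustrative only.
from itertools import cycle
from typing import Any, Dict, Iterable, Iterator, Tuple

def round_robin_batches(task_loaders: Dict[str, Iterable]) -> Iterator[Tuple[str, Any]]:
    """Yield (task_name, batch) pairs, cycling over tasks until every loader is exhausted."""
    iterators = {name: iter(loader) for name, loader in task_loaders.items()}
    active = cycle(list(iterators.keys()))
    remaining = set(iterators.keys())
    while remaining:
        name = next(active)
        if name not in remaining:
            continue                      # this task's loader is already exhausted
        try:
            yield name, next(iterators[name])
        except StopIteration:
            remaining.discard(name)
```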
[
"We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.",
"Hasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.",
"Impatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.",
"BiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context. Here, we adapt it to work in a multimodal setting and answer multiple choice questions instead.",
"BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities. However, it does not make any updates on the memory cells. That is, it uses the static entity embeeddings initialized with GloVe word vectors. We propose this baseline to test the significance of the use of relational memory updates."
],
[
"Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe. In multi-task training setting where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF. Note that the model performances in the multi-task training setting are worse than single-task performances. We believe that this is due to the nature of the tasks that some are more difficult than the others. We think that the performance could be improved by employing a carefully selected curriculum strategy BIBREF20.",
"In Fig. FIGREF28, we illustrate the entity embeddings space by projecting the learned embeddings from the step-by-step memory snapshots through time with t-SNE to 3-d space from 200-d vector space. Color codes denote the categories of the cooking recipes. As can be seen, these step-aware embeddings show clear clustering of these categories. Moreover, within each cluster, the entities are grouped together in terms of their state characteristics. For instance, in the zoomed parts of the figure, chopped and sliced, or stirred and whisked entities are placed close to each other.",
"Fig. FIGREF30 demonstrates the entity arithmetics using the learned embeddings from each entity step. Here, we show that the learned embedding from the memory snapshots can effectively capture the contextual information about the entities at each time point in the corresponding step while taking into account of the recipe data. This basic arithmetic operation suggests that the proposed model can successfully capture the semantics of each entity's state in the corresponding step."
],
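An illustrative sketch of the visualization described above: projecting the 200-d entity-state embeddings collected from the memory snapshots down to 3-d with t-SNE. The t-SNE settings and variable names are assumptions; the paper does not specify them.

```python
# t-SNE projection of entity-state embeddings for visualization; settings are assumptions.
import numpy as np
from sklearn.manifold import TSNE

def project_entity_embeddings(snapshots: np.ndarray) -> np.ndarray:
    """snapshots: (num_entity_states, 200) array of entity embeddings from memory snapshots."""
    tsne = TSNE(n_components=3, init="pca", random_state=0)
    return tsne.fit_transform(snapshots)        # (num_entity_states, 3) points for plotting
```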
[
"In recent years, tracking entities and their state changes have been explored in the literature from a variety of perspectives. In an early work, BIBREF21 proposed a dynamic memory based network which updates entity states using a gating mechanism while reading the text. BIBREF22 presented a more structured memory augmented model which employs memory slots for representing both entities and their relations. BIBREF23 suggested a conceptually similar model in which the pairwise relations between attended memories are utilized to encode the world state. The main difference between our approach and these works is that by utilizing relational memory core units we also allow memories to interact with each other during each update.",
"BIBREF24 showed that similar ideas can be used to compile supporting memories in tracking dialogue state. BIBREF25 has shown the importance of coreference signals for reading comprehension task. More recently, BIBREF26 introduced a specialized recurrent layer which uses coreference annotations for improving reading comprehension tasks. On language modeling task, BIBREF27 proposed a language model which can explicitly incorporate entities while dynamically updating their representations for a variety of tasks such as language modeling, coreference resolution, and entity prediction.",
"Our work builds upon and contributes to the growing literature on tracking states changes in procedural text. BIBREF0 presented a neural model that can learn to explicitly predict state changes of ingredients at different points in a cooking recipe. BIBREF1 proposed another entity-aware model to track entity states in scientific processes. BIBREF3 demonstrated that the prediction quality can be boosted by including hard and soft constraints to eliminate unlikely or favor probable state changes. In a follow-up work, BIBREF4 exploited the notion of label consistency in training to enforce similar predictions in similar procedural contexts. BIBREF28 proposed a model that dynamically constructs a knowledge graph while reading the procedural text to track the ever-changing entities states. As discussed in the introduction, however, these previous methods use a strong inductive bias and assume that state labels are present during training. In our study, we deliberately focus on unlabeled procedural data and ask the question: Can multimodality help to identify and provide insights to understanding state changes."
],
[
"We have presented a new neural architecture called Procedural Reasoning Networks (PRN) for multimodal understanding of step-by-step instructions. Our proposed model is based on the successful BiDAF framework but also equipped with an explicit memory unit that provides an implicit mechanism to keep track of the changes in the states of the entities over the course of the procedure. Our experimental analysis on visual reasoning tasks in the RecipeQA dataset shows that the model significantly improves the results of the previous models, indicating that it better understands the procedural text and the accompanying images. Additionally, we carefully analyze our results and find that our approach learns meaningful dynamic representations of entities without any entity-level supervision. Although we achieve state-of-the-art results on RecipeQA, clearly there is still room for improvement compared to human performance. We also believe that the PRN architecture will be of value to other visual and textual sequential reasoning tasks."
],
[
"We thank the anonymous reviewers and area chairs for their invaluable feedback. This work was supported by TUBA GEBIP fellowship awarded to E. Erdem; and by the MMVC project via an Institutional Links grant (Project No. 217E054) under the Newton-Katip Çelebi Fund partnership funded by the Scientific and Technological Research Council of Turkey (TUBITAK) and the British Council. We also thank NVIDIA Corporation for the donation of GPUs used in this research."
]
]
} | {
"question": [
"What multimodality is available in the dataset?",
"What are previously reported models?",
"How better is accuracy of new model compared to previously reported models?"
],
"question_id": [
"a883bb41449794e0a63b716d9766faea034eb359",
"5d83b073635f5fd8cd1bdb1895d3f13406583fbd",
"171ebfdc9b3a98e4cdee8f8715003285caeb2f39"
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"context is a procedural text, the question and the multiple choice answers are composed of images"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In the following, we explain our Procedural Reasoning Networks model. Its architecture is based on a bi-directional attention flow (BiDAF) model BIBREF6, but also equipped with an explicit reasoning module that acts on entity-specific relational memory units. Fig. FIGREF4 shows an overview of the network architecture. It consists of five main modules: An input module, an attention module, a reasoning module, a modeling module, and an output module. Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images."
],
"highlighted_evidence": [
"Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text, the question and the multiple choice answers are composed of images."
]
},
{
"unanswerable": false,
"extractive_spans": [
"images and text"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In particular, we take advantage of recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. To this end, inspired from BIBREF5, we propose Procedural Reasoning Networks (PRN) that incorporates entities into the comprehension process and allows to keep track of entities, understand their interactions and accordingly update their states across time. We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding causal and temporal relations from images and text. We further show that the dynamic entity representations can capture semantics of the state information in the corresponding steps."
],
"highlighted_evidence": [
"In particular, we take advantage of recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. ",
"We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding causal and temporal relations from images and text. "
]
}
],
"annotation_id": [
"4e5d6e5c9fcd614bd589bc0ea42cc2997bcf28eb",
"9a39d77579baa6cde733cb84ad043de21ec9d0d5"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Hasty Student",
"Impatient Reader",
"BiDAF",
"BiDAF w/ static memory"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.",
"Hasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.",
"Impatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.",
"BiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context. Here, we adapt it to work in a multimodal setting and answer multiple choice questions instead.",
"BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities. However, it does not make any updates on the memory cells. That is, it uses the static entity embeeddings initialized with GloVe word vectors. We propose this baseline to test the significance of the use of relational memory updates."
],
"highlighted_evidence": [
"We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.\n\nHasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.\n\nImpatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.\n\nBiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context.",
"BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities."
]
}
],
"annotation_id": [
"4c7a5de9be2822f80cb4ff3b2b5e2467f53c3668"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Average accuracy of proposed model vs best prevous result:\nSingle-task Training: 57.57 vs 55.06\nMulti-task Training: 50.17 vs 50.59",
"evidence": [
"Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe. In multi-task training setting where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF. Note that the model performances in the multi-task training setting are worse than single-task performances. We believe that this is due to the nature of the tasks that some are more difficult than the others. We think that the performance could be improved by employing a carefully selected curriculum strategy BIBREF20.",
"FLOAT SELECTED: Table 1: Quantitative comparison of the proposed PRN model against the baselines."
],
"highlighted_evidence": [
"Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models.",
"In multi-task training setting where a single model is trained to solve all the tasks at once, PRN and BIDAF w/ static memory perform comparably and give much better results than BIDAF.",
"FLOAT SELECTED: Table 1: Quantitative comparison of the proposed PRN model against the baselines."
]
}
],
"annotation_id": [
"ec1378e356486a4ae207f3c0cd9adc9dab841863"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: A recipe for preparing a cheeseburger (adapted from the cooking instructions available at https: //www.instructables.com/id/In-N-Out-Double-Double-Cheeseburger-Copycat). Each basic ingredient (entity) is highlighted by a different color in the text and with bounding boxes on the accompanying images. Over the course of the recipe instructions, ingredients interact with each other, change their states by each cooking action (underlined in the text), which in turn alter the visual and physical properties of entities. For instance, the tomato changes it form by being sliced up and then stacked on a hamburger bun.",
"Figure 2: An illustration of our Procedural Reasoning Networks (PRN). For a sample question from visual coherence task in RecipeQA, while reading the cooking recipe, the model constantly performs updates on the representations of the entities (ingredients) after each step and makes use of their representations along with the whole recipe when it scores a candidate answer. Please refer to the main text for more details.",
"Figure 3: Sample visualizations of the self-attention weights demonstrating both the interactions among the ingredients and between the ingredients and the textual instructions throughout the steps of a sample cooking recipe from RecipeQA (darker colors imply higher attention weights). The attention maps do not change much after the third step as the steps after that mostly provide some redundant information about the completed recipe.",
"Figure 4: t-SNE visualizations of learned embeddings from each memory snapshot mapping to each entity and their corresponding states from each step for visual cloze task.",
"Table 1: Quantitative comparison of the proposed PRN model against the baselines.",
"Figure 5: Step-aware entity representations can be used to discover the changes occurred in the states of the ingredients between two different recipe steps. The difference vector between two entities can then be added to other entities to find their next states. For instance, in the first example, the difference vector encodes the chopping action done on onions. In the second example, it encodes the pouring action done on the water. When these vectors are added to the representations of raw tomatoes and milk, the three most likely next states capture the semantics of state changes in an accurate manner."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Figure3-1.png",
"7-Figure4-1.png",
"7-Table1-1.png",
"8-Figure5-1.png"
]
} |
1908.08419 | Active Learning for Chinese Word Segmentation in Medical Text | Electronic health records (EHRs) stored in hospital information systems completely reflect the patients' diagnosis and treatment processes, which are essential to clinical data mining. Chinese word segmentation (CWS) is a fundamental and important task for Chinese natural language processing. Currently, most state-of-the-art CWS methods greatly depend on large-scale manually-annotated data, whose annotation is very time-consuming and expensive, especially in the medical field. In this paper, we present an active learning method for CWS in medical text. To effectively utilize the complete segmentation history, we propose a new scoring model for the sampling strategy, which combines information entropy with a neural network. Besides, to capture interactions between adjacent characters, K-means clustering features are additionally added to the word segmenter. We experimentally evaluate our proposed CWS method on medical text. Experimental results based on EHRs collected from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine show that our proposed method outperforms other reference methods and can effectively save the cost of manual annotation.
"section_name": [
"Introduction",
"Chinese Word Segmentation",
"Active Learning",
"Active Learning for Chinese Word Segmentation",
"CRF-based Word Segmenter",
"Information Entropy Based Scoring Model",
"Datasets",
"Parameter Settings",
"Experimental Results",
"Conclusion and Future Work",
"Acknowledgment"
],
"paragraphs": [
[
"Electronic health records (EHRs) systematically collect patients' clinical information, such as health profiles, histories of present illness, past medical histories, examination results and treatment plans BIBREF0 . By analyzing EHRs, many useful information, closely related to patients, can be discovered BIBREF1 . Since Chinese EHRs are recorded without explicit word delimiters (e.g., “UTF8gkai糖尿病酮症酸中毒” (diabetic ketoacidosis)), Chinese word segmentation (CWS) is a prerequisite for processing EHRs. Currently, state-of-the-art CWS methods usually require large amounts of manually-labeled data to reach their full potential. However, there are many challenges inherent in labeling EHRs. First, EHRs have many medical terminologies, such as “UTF8gkai高血压性心脏病” (hypertensive heart disease) and “UTF8gkai罗氏芬” (Rocephin), so only annotators with medical backgrounds can be qualified to label EHRs. Second, EHRs may involve personal privacies of patients. Therefore, they cannot be openly published on a large scale for labeling. The above two problems lead to the high annotation cost and insufficient training corpus in the research of CWS in medical text.",
"CWS was usually formulated as a sequence labeling task BIBREF2 , which can be solved by supervised learning approaches, such as hidden markov model (HMM) BIBREF3 and conditional random field (CRF) BIBREF4 . However, these methods rely heavily on handcrafted features. To relieve the efforts of feature engineering, neural network-based methods are beginning to thrive BIBREF5 , BIBREF6 , BIBREF7 . However, due to insufficient annotated training data, conventional models for CWS trained on open corpus often suffer from significant performance degradation when transferred to a domain-specific text. Moreover, the task in medical domain is rarely dabbled, and only one related work on transfer learning is found in recent literatures BIBREF8 . However, researches related to transfer learning mostly remain in general domains, causing a major problem that a considerable amount of manually annotated data is required, when introducing the models into specific domains.",
"One of the solutions for this obstacle is to use active learning, where only a small scale of samples are selected and labeled in an active manner. Active learning methods are favored by the researchers in many natural language processing (NLP) tasks, such as text classification BIBREF9 and named entity recognition (NER) BIBREF10 . However, only a handful of works are conducted on CWS BIBREF2 , and few focuses on medical domain tasks.",
"Given the aforementioned challenges and current researches, we propose a word segmentation method based on active learning. To model the segmentation history, we incorporate a sampling strategy consisting of word score, link score and sequence score, which effectively evaluates the segmentation decisions. Specifically, we combine information branch and gated neural network to determine if the segment is a legal word, i.e., word score. Meanwhile, we use the hidden layer output of the long short-term memory (LSTM) BIBREF11 to find out how the word is linked to its surroundings, i.e., link score. The final decision on the selection of labeling samples is made by calculating the average of word and link scores on the whole segmented sentence, i.e., sequence score. Besides, to capture coherence over characters, we additionally add K-means clustering features to the input of CRF-based word segmenter.",
"To sum up, the main contributions of our work are summarized as follows:",
"The rest of this paper is organized as follows. Section SECREF2 briefly reviews the related work on CWS and active learning. Section SECREF3 presents an active learning method for CWS. We experimentally evaluate our proposed method in Section SECREF4 . Finally, Section SECREF5 concludes the paper and envisions on future work."
],
[
"In past decades, researches on CWS have a long history and various methods have been proposed BIBREF13 , BIBREF14 , BIBREF15 , which is an important task for Chinese NLP BIBREF7 . These methods are mainly focus on two categories: supervised learning and deep learning BIBREF2 .",
"Supervised Learning Methods. Initially, supervised learning methods were widely-used in CWS. Xue BIBREF13 employed a maximum entropy tagger to automatically assign Chinese characters. Zhao et al. BIBREF16 used a conditional random field for tag decoding and considered both feature template selection and tag set selection. However, these methods greatly rely on manual feature engineering BIBREF17 , while handcrafted features are difficult to design, and the size of these features is usually very large BIBREF6 .",
"Deep Learning Methods. Recently, neural networks have been applied in CWS tasks. To name a few, Zheng et al. BIBREF14 used deep layers of neural networks to learn feature representations of characters. Chen et al. BIBREF6 adopted LSTM to capture the previous important information. Chen et al. BIBREF18 proposed a gated recursive neural network (GRNN), which contains reset and update gates to incorporate the complicated combinations of characters. Jiang and Tang BIBREF19 proposed a sequence-to-sequence transformer model to avoid overfitting and capture character information at the distant site of a sentence. Yang et al. BIBREF20 investigated subword information for CWS and integrated subword embeddings into a Lattice LSTM (LaLSTM) network. However, general word segmentation models do not work well in specific field due to lack of annotated training data.",
"Currently, a handful of domain-specific CWS approaches have been studied, but they focused on decentralized domains. In the metallurgical field, Shao et al. BIBREF15 proposed a domain-specific CWS method based on Bi-LSTM model. In the medical field, Xing et al. BIBREF8 proposed an adaptive multi-task transfer learning framework to fully leverage domain-invariant knowledge from high resource domain to medical domain. Meanwhile, transfer learning still greatly focuses on the corpus in general domain. When it comes to the specific domain, large amounts of manually-annotated data is necessary. Active learning can solve this problem to a certain extent. However, due to the challenges faced by performing active learning on CWS, only a few studies have been conducted. On judgements, Yan et al. BIBREF21 adopted the local annotation strategy, which selects substrings around the informative characters in active learning. However, their method still stays at the statistical level. Unlike the above method, we propose an active learning approach for CWS in medical text, which combines information entropy with neural network to effectively reduce annotation cost."
],
[
"Active learning BIBREF22 mainly aims to ease the data collection process by automatically deciding which instances should be labeled by annotators to train a model as quickly and effectively as possible BIBREF23 . The sampling strategy plays a key role in active learning. In the past decade, the rapid development of active learning has resulted in various sampling strategies, such as uncertainty sampling BIBREF24 , query-by-committee BIBREF25 and information gain BIBREF26 . Currently, the most mainstream sampling strategy is uncertainty sampling. It focuses its selection on samples closest to the decision boundary of the classifier and then chooses these samples for annotators to relabel BIBREF27 .",
"The formal definition of uncertainty sampling is to select a sample INLINEFORM0 that maximizes the entropy INLINEFORM1 over the probability of predicted classes: DISPLAYFORM0 ",
"where INLINEFORM0 is a multi-dimensional feature vector, INLINEFORM1 is its binary label, and INLINEFORM2 is the predicted probability, through which a classifier trained on training sets can map features to labels. However, in some complicated tasks, such as CWS and NER, only considering the uncertainty of classifier is obviously not enough."
],
[
"Active learning methods can generally be described into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training of classification problems. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from unlabeled data. Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively.",
"Fig. FIGREF7 and Algorithm SECREF3 demonstrate the procedure of CWS based on active learning. First, we train a CRF-based segmenter by train set. Then, the segmenter is employed to annotate the unlabeled set roughly. Subsequently, information entropy based scoring model picks INLINEFORM0 -lowest ranking samples for annotators to relabel. Meanwhile, the train sets and unlabeled sets are updated. Finally, we re-train the segmenter. The above steps iterate until the desired accuracy is achieved or the number of iterations has reached a predefined threshold. [!ht] Active Learning for Chinese Word Segmentation labeled data INLINEFORM1 , unlabeled data INLINEFORM2 , the number of iterations INLINEFORM3 , the number of samples selected per iteration INLINEFORM4 , partitioning function INLINEFORM5 , size INLINEFORM6 a word segmentation model INLINEFORM7 with the smallest test set loss INLINEFORM8 Initialize: INLINEFORM9 ",
" train a word segmenter INLINEFORM0 ",
" estimate the test set loss INLINEFORM0 ",
" label INLINEFORM0 by INLINEFORM1 ",
" INLINEFORM0 to INLINEFORM1 INLINEFORM2 compute INLINEFORM3 by branch information entropy based scoring model",
" select INLINEFORM0 -lowest ranking samples INLINEFORM1 ",
"relabel INLINEFORM0 by annotators",
"form a new labeled dataset INLINEFORM0 ",
"form a new unlabeled dataset INLINEFORM0 ",
"train a word segmenter INLINEFORM0 ",
"estimate the new test loss INLINEFORM0 ",
"compute the loss reduction INLINEFORM0 ",
" INLINEFORM0 INLINEFORM1 ",
" INLINEFORM0 ",
" INLINEFORM0 INLINEFORM1 with the smallest test set loss INLINEFORM2 INLINEFORM3 "
],
[
"CWS can be formalized as a sequence labeling problem with character position tags, which are (`B', `M', `E', `S'). So, we convert the labeled data into the `BMES' format, in which each character in the sequence is assigned into a label as follows one by one: B=beginning of a word, M=middle of a word, E=end of a word and S=single word.",
"In this paper, we use CRF as a training model for CWS task. Given the observed sequence, CRF has a single exponential model for the joint probability of the entire sequence of labels, while maximum entropy markov model (MEMM) BIBREF29 uses per-state exponential models for the conditional probabilities of next states BIBREF4 . Therefore, it can solve the label bias problem effectively. Compared with neural networks, it has less dependency on the corpus size.",
"First, we pre-process EHRs at the character-level, separating each character of raw EHRs. For instance, given a sentence INLINEFORM0 , where INLINEFORM1 represents the INLINEFORM2 -th character, the separated form is INLINEFORM3 . Then, we employ Word2Vec BIBREF30 to train pre-processed EHRs to get character embeddings. To capture interactions between adjacent characters, K-means clustering algorithm BIBREF31 is utilized to feature the coherence over characters. In general, K-means divides INLINEFORM4 EHR characters into INLINEFORM5 groups of clusters and the similarity of EHR characters in the same cluster is higher. With each iteration, K-means can classify EHR characters into the nearest cluster based on distance to the mean vector. Then, recalculating and adjusting the mean vectors of these clusters until the mean vector converges. K-means features explicitly show the difference between two adjacent characters and even multiple characters. Finally, we additionally add K-means clustering features to the input of CRF-based segmenter. The segmenter makes positional tagging decisions over individual characters. For example, a Chinese segmented sentence UTF8gkai“病人/长期/于/我院/肾病科/住院/治疗/。/\" (The patient was hospitalized for a long time in the nephrology department of our hospital.) is labeled as `BEBESBEBMEBEBES'."
],
[
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model.",
"We use gated neural network and information entropy to capture the likelihood of the segment being a legal word. The architecture of word score model is depicted in Fig. FIGREF12 .",
"Gated Combination Neural Network (GCNN)",
"To effectively learn word representations through character embeddings, we use GCNN BIBREF32 . The architecture of GCNN is demonstrated in Fig. FIGREF13 , which includes update gate and reset gate. The gated mechanism not only captures the characteristics of the characters themselves, but also utilizes the interaction between the characters. There are two types of gates in this network structure: reset gates and update gates. These two gated vectors determine the final output of the gated recurrent neural network, where the update gate helps the model determine what to be passed, and the reset gate primarily helps the model decide what to be cleared. In particular, the word embedding of a word with INLINEFORM0 characters can be computed as: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are update gates for new combination vector INLINEFORM2 and the i-th character INLINEFORM3 respectively, the combination vector INLINEFORM4 is formalized as: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are reset gates for characters.",
"Left and Right Branch Information Entropy In general, each string in a sentence may be a word. However, compared with a string which is not a word, the string of a word is significantly more independent. The branch information entropy is usually used to judge whether each character in a string is tightly linked through the statistical characteristics of the string, which reflects the likelihood of a string being a word. The left and right branch information entropy can be formalized as follows: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 denotes the INLINEFORM1 -th candidate word, INLINEFORM2 denotes the character set, INLINEFORM3 denotes the probability that character INLINEFORM4 is on the left of word INLINEFORM5 and INLINEFORM6 denotes the probability that character INLINEFORM7 is on the right of word INLINEFORM8 . INLINEFORM9 and INLINEFORM10 respectively represent the left and right branch information entropy of the candidate word INLINEFORM11 . If the left and right branch information entropy of a candidate word is relatively high, the probability that the candidate word can be combined with the surrounded characters to form a word is low, thus the candidate word is likely to be a legal word.",
"To judge whether the candidate words in a segmented sentence are legal words, we compute the left and right entropy of each candidate word, then take average as the measurement standard: DISPLAYFORM0 ",
"We represent a segmented sentence with INLINEFORM0 candidate words as [ INLINEFORM1 , INLINEFORM2 ,..., INLINEFORM3 ], so the INLINEFORM4 ( INLINEFORM5 ) of the INLINEFORM6 -th candidate word is computed by its average entropy: DISPLAYFORM0 ",
"In this paper, we use LSTM to capture the coherence between words in a segmented sentence. This neural network is mainly an optimization for traditional RNN. RNN is widely used to deal with time-series prediction problems. The result of its current hidden layer is determined by the input of the current layer and the output of the previous hidden layer BIBREF33 . Therefore, RNN can remember historical results. However, traditional RNN has problems of vanishing gradient and exploding gradient when training long sequences BIBREF34 . By adding a gated mechanism to RNN, LSTM effectively solves these problems, which motivates us to get the link score with LSTM. Formally, the LSTM unit performs the following operations at time step INLINEFORM0 : DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are the inputs of LSTM, all INLINEFORM3 and INLINEFORM4 are a set of parameter matrices to be trained, and INLINEFORM5 is a set of bias parameter matrices to be trained. INLINEFORM6 and INLINEFORM7 operation respectively represent matrix element-wise multiplication and sigmoid function. In the LSTM unit, there are two hidden layers ( INLINEFORM8 , INLINEFORM9 ), where INLINEFORM10 is the internal memory cell for dealing with vanishing gradient, while INLINEFORM11 is the main output of the LSTM unit for complex operations in subsequent layers.",
"We denotes INLINEFORM0 as the word embedding of time step INLINEFORM1 , a prediction INLINEFORM2 of next word embedding INLINEFORM3 can be computed by hidden layer INLINEFORM4 : DISPLAYFORM0 ",
"Therefore, link score of next word embedding INLINEFORM0 can be computed as: DISPLAYFORM0 ",
"Due to the structure of LSTM, vector INLINEFORM0 contains important information of entire segmentation decisions. In this way, the link score gets the result of the sequence-level word segmentation, not just word-level.",
"Intuitively, we can compute the score of a segmented sequence by summing up word scores and link scores. However, we find that a sequence with more candidate words tends to have higher sequence scores. Therefore, to alleviate the impact of the number of candidate words on sequence scores, we calculate final scores as follows: DISPLAYFORM0 ",
"where INLINEFORM0 denotes the INLINEFORM1 -th segmented sequence with INLINEFORM2 candidate words, and INLINEFORM3 represents the INLINEFORM4 -th candidate words in the segmented sequence.",
"When training the model, we seek to minimize the sequence score of the corrected segmented sentence and the predicted segmented sentence. DISPLAYFORM0 ",
"where INLINEFORM0 is the loss function."
],
[
"We collect 204 EHRs with cardiovascular diseases from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine and each contains 27 types of records. We choose 4 different types with a total of 3868 records from them, which are first course reports, medical records, chief ward round records and discharge records. The detailed information of EHRs are listed in Table TABREF32 .",
"We split our datasets as follows. First, we randomly select 3200 records from 3868 records as unlabeled set. Then, we manually annotate remaining 668 records as labeled set, which contains 1170 sentences. Finally, we divide labeled set into train set and test set with the ratio of 7:3 randomly. Statistics of datasets are listed in Table TABREF33 ."
],
[
"To determine suitable parameters, we divide training set into two sets, the first 80% sentences as training set and the rest 20% sentences as validation set.",
"Character embedding dimensions and K-means clusters are two main parameters in the CRF-based word segmenter.",
"In this paper, we choose character-based CRF without any features as baseline. First, we use Word2Vec to train character embeddings with dimensions of [`50', `100', `150', `200', `300', `400'] respectively, thus we obtain 6 different dimensional character embeddings. Second, these six types of character embeddings are used as the input to K-means algorithm with the number of clusters [`50', `100', `200', `300', `400', `500', `600'] respectively to capture the corresponding features of character embeddings. Then, we add K-means clustering features to baseline for training. As can be seen from Fig. FIGREF36 , when the character embedding dimension INLINEFORM0 = 150 and the number of clusters INLINEFORM1 = 400, CRF-based word segmenter performs best, so these two parameters are used in subsequent experiments.",
"Hyper-parameters of neural network have a great impact on the performance. The hyper-parameters we choose are listed in Table TABREF38 .",
"The dimension of character embeddings is set as same as the parameter used in CRF-based word segmenter and the number of hidden units is also set to be the same as it. Maximum word length is ralated to the number of parameters in GCNN unit. Since there are many long medical terminologies in EHRs, we set the maximum word length as 6. In addition, dropout is an effective way to prevent neural networks from overfitting BIBREF35 . To avoid overfitting, we drop the input layer of the scoring model with the rate of 20%."
],
[
"Our work experimentally compares two mainstream CWS tools (LTP and Jieba) on training and testing sets. These two tools are widely used and recognized due to their high INLINEFORM0 -score of word segmentation in general fields. However, in specific fields, there are many terminologies and uncommon words, which lead to the unsatisfactory performance of segmentation results. To solve the problem of word segmentation in specific fields, these two tools provide a custom dictionary for users. In the experiments, we also conduct a comparative experiment on whether external domain dictionary has an effect on the experimental results. We manually construct the dictionary when labeling EHRs.",
"From the results in Table TABREF41 , we find that Jieba benefits a lot from the external dictionary. However, the Recall of LTP decreases when joining the domain dictionary. Generally speaking, since these two tools are trained by general domain corpus, the results are not ideal enough to cater to the needs of subsequent NLP of EHRs when applied to specific fields.",
"To investigate the effectiveness of K-means features in CRF-based segmenter, we also compare K-means with 3 different clustering features, including MeanShift BIBREF36 , SpectralClustering BIBREF37 and DBSCAN BIBREF38 on training and testing sets. From the results in Table TABREF43 , by adding additional clustering features in CRF-based segmenter, there is a significant improvement of INLINEFORM0 -score, which indicates that clustering features can effectively capture the semantic coherence between characters. Among these clustering features, K-means performs best, so we utlize K-means results as additional features for CRF-based segmenter.",
"In this experiment, since uncertainty sampling is the most popular strategy in real applications for its simpleness and effectiveness BIBREF27 , we compare our proposed strategy with uncertainty sampling in active learning. We conduct our experiments as follows. First, we employ CRF-based segmenter to annotate the unlabeled set. Then, sampling strategy in active learning selects a part of samples for annotators to relabel. Finally, the relabeled samples are added to train set for segmenter to re-train. Our proposed scoring strategy selects samples according to the sequence scores of the segmented sentences, while uncertainty sampling suggests relabeling samples that are closest to the segmenter’s decision boundary.",
"Generally, two main parameters in active learning are the numbers of iterations and samples selected per iteration. To fairly investigate the influence of two parameters, we compare our proposed strategy with uncertainty sampling on the same parameter. We find that though the number of iterations is large enough, it has a limited impact on the performance of segmenter. Therefore, we choose 30 as the number of iterations, which is a good trade-off between speed and performance. As for the number of samples selected per iteration, there are 6078 sentences in unlabeled set, considering the high cost of relabeling, we set four sizes of samples selected per iteration, which are 2%, 5%, 8% and 11%.",
"The experimental results of two sampling strategies with 30 iterations on four different proportions of relabeled data are shown in Fig. FIGREF45 , where x-axis represents the number of iterations and y-axis denotes the INLINEFORM0 -score of the segmenter. Scoring strategy shows consistent improvements over uncertainty sampling in the early iterations, indicating that scoring strategy is more capable of selecting representative samples.",
"Furthermore, we also investigate the relations between the best INLINEFORM0 -score and corresponding number of iteration on two sampling strategies, which is depicted in Fig. FIGREF46 .",
"It is observed that in our proposed scoring model, with the proportion of relabeled data increasing, the iteration number of reaching the optimal word segmentation result is decreasing, but the INLINEFORM0 -score of CRF-based word segmenter is also gradually decreasing. When the proportion is 2%, the segmenter reaches the highest INLINEFORM1 -score: 90.62%. Obviously, our proposed strategy outperforms uncertainty sampling by a large margin. Our proposed method needs only 2% relabeled samples to obtain INLINEFORM2 -score of 90.62%, while uncertainty sampling requires 8% samples to reach its best INLINEFORM3 -score of 88.98%, which indicates that with our proposed method, we only need to manually relabel a small number of samples to achieve a desired segmentation result."
],
[
"To relieve the efforts of EHRs annotation, we propose an effective word segmentation method based on active learning, in which the sampling strategy is a scoring model combining information entropy with neural network. Compared with the mainstream uncertainty sampling, our strategy selects samples from statistical perspective and deep learning level. In addition, to capture coherence between characters, we add K-means clustering features to CRF-based word segmenter. Based on EHRs collected from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, we evaluate our method on CWS task. Compared with uncertainty sampling, our method requires 6% less relabeled samples to achieve better performance, which proves that our method can save the cost of manual annotation to a certain extent.",
"In future, we plan to employ other widely-used deep neural networks, such as convolutional neural network and attention mechanism, in the research of EHRs segmentation. Then, we believe that our method can be applied to other tasks as well, so we will fully investigate the application of our method in other tasks, such as NER and relation extraction."
],
[
"The authors would like to appreciate any suggestions or comments from the anonymous reviewers. This work was supported by the National Natural Science Foundation of China (No. 61772201) and the National Key R&D Program of China for “Precision medical research\" (No. 2018YFC0910550)."
]
]
} | {
"question": [
"How does the scoring model work?",
"How does the active learning model work?",
"Which neural network architectures are employed?"
],
"question_id": [
"3c3cb51093b5fd163e87a773a857496a4ae71f03",
"53a0763eff99a8148585ac642705637874be69d4",
"0bfed6f9cfe93617c5195c848583e3945f2002ff"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"word segmentation",
"word segmentation",
"word segmentation"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
],
"highlighted_evidence": [
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
]
},
{
"unanswerable": false,
"extractive_spans": [
" the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
],
"highlighted_evidence": [
"The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. "
]
}
],
"annotation_id": [
"7f52a42b5c714e3a236ad19e17d6118d7150020d",
"dfd42925ad6801aefc716d18331afc2671840e52"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Active learning methods has a learning engine (mainly used for training of classification problems) and the selection engine (which chooses samples that need to be relabeled by annotators from unlabeled data). Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively.",
"evidence": [
"Active learning methods can generally be described into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training of classification problems. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from unlabeled data. Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively."
],
"highlighted_evidence": [
"Active learning methods can generally be described into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training of classification problems. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from unlabeled data. Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively."
]
}
],
"annotation_id": [
"589355ec9f709793c89446fbfa5eba29dcd02fa5"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"gated neural network "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
],
"highlighted_evidence": [
"A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to a LSTM model."
]
}
],
"annotation_id": [
"91d6990deb8ffb2a24a890eea56dd15de40b3546"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Fig. 1. The diagram of active learning for the Chinese word segmentation.",
"Fig. 2. The architecture of the information entropy based scoring model, where ‘/’ represents candidate word separator, xi represents the one-hot encoding of the i-th character, cj represents the j-th character embedding learned by Word2Vec, wm represents the distributed representation of the mth candidate word and pn represents the prediction of the (n+1)-th candidate word.",
"Fig. 3. The architecture of word score, where ‘/’ represents candidate word separator, ci represents the i-th character embedding, wj represents the j-th candidate word embedding and ScoreWord(wk) represents the word score of the k-th candidate word.",
"Fig. 4. The architecture of GCNN.",
"TABLE I DETAILED INFORMATION OF EHRS",
"TABLE III HYPER-PARAMETER SETTING.",
"TABLE IV EXPERIMENTAL RESULTS WITH DIFFERENT WORD SEGMENTATION TOOLS.",
"Fig. 5. The relation between F1-score and K-means class with different character embedding dimensions.",
"TABLE II STATISTICS OF DATASETS",
"TABLE V COMPARISON WITH DIFFERENT CLUSTERING FEATURES.",
"Fig. 7. The relations between the best F1-score and corresponding iteration on two sampling strategies with different relabeled sample sizes.",
"Fig. 6. The results of two sampling strategies with different relabeled sample sizes."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"5-TableI-1.png",
"6-TableIII-1.png",
"6-TableIV-1.png",
"6-Figure5-1.png",
"6-TableII-1.png",
"7-TableV-1.png",
"7-Figure7-1.png",
"7-Figure6-1.png"
]
} |
1703.05260 | InScript: Narrative texts annotated with script information | This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing. | {
"section_name": [
"Motivation",
"Collection via Amazon M-Turk",
"Data Statistics",
"Annotation",
"Annotation Schema",
"Development of the Schema",
"First Annotation Phase",
"Modification of the Schema",
"Special Cases",
"Inter-Annotator Agreement",
"Annotated Corpus Statistics",
"Comparison to the DeScript Corpus",
"Conclusion",
"Acknowledgements"
],
"paragraphs": [
[
"A script is “a standardized sequence of events that describes some stereotypical human activity such as going to a restaurant or visiting a doctor” BIBREF0 . Script events describe an action/activity along with the involved participants. For example, in the script describing a visit to a restaurant, typical events are entering the restaurant, ordering food or eating. Participants in this scenario can include animate objects like the waiter and the customer, as well as inanimate objects such as cutlery or food.",
"Script knowledge has been shown to play an important role in text understanding (cullingford1978script, miikkulainen1995script, mueller2004understanding, Chambers2008, Chambers2009, modi2014inducing, rudinger2015learning). It guides the expectation of the reader, supports coreference resolution as well as common-sense knowledge inference and enables the appropriate embedding of the current sentence into the larger context. Figure 1 shows the first few sentences of a story describing the scenario taking a bath. Once the taking a bath scenario is evoked by the noun phrase (NP) “a bath”, the reader can effortlessly interpret the definite NP “the faucet” as an implicitly present standard participant of the taking a bath script. Although in this story, “entering the bath room”, “turning on the water” and “filling the tub” are explicitly mentioned, a reader could nevertheless have inferred the “turning on the water” event, even if it was not explicitly mentioned in the text. Table 1 gives an example of typical events and participants for the script describing the scenario taking a bath.",
"A systematic study of the influence of script knowledge in texts is far from trivial. Typically, text documents (e.g. narrative texts) describing various scenarios evoke many different scripts, making it difficult to study the effect of a single script. Efforts have been made to collect scenario-specific script knowledge via crowdsourcing, for example the OMICS and SMILE corpora (singh2002open, Regneri:2010, Regneri2013), but these corpora describe script events in a pointwise telegram style rather than in full texts.",
"This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). It is a corpus of simple narrative texts in the form of stories, wherein each story is centered around a specific scenario. The stories have been collected via Amazon Mechanical Turk (M-Turk). In this experiment, turkers were asked to write down a concrete experience about a bus ride, a grocery shopping event etc. We concentrated on 10 scenarios and collected 100 stories per scenario, giving a total of 1,000 stories with about 200,000 words. Relevant verbs and noun phrases in all stories are annotated with event types and participant types respectively. Additionally, the texts have been annotated with coreference information in order to facilitate the study of the interdependence between script structure and coreference.",
"The InScript corpus is a unique resource that provides a basis for studying various aspects of the role of script knowledge in language processing by humans. The acquisition of this corpus is part of a larger research effort that aims at using script knowledge to model the surprisal and information density in written text. Besides InScript, this project also released a corpus of generic descriptions of script activities called DeScript (for Describing Script Structure, Wanzare2016). DeScript contains a range of short and textually simple phrases that describe script events in the style of OMICS or SMILE (singh2002open, Regneri:2010). These generic telegram-style descriptions are called Event Descriptions (EDs); a sequence of such descriptions that cover a complete script is called an Event Sequence Description (ESD). Figure 2 shows an excerpt of a script in the baking a cake scenario. The figure shows event descriptions for 3 different events in the DeScript corpus (left) and fragments of a story in the InScript corpus (right) that instantiate the same event type."
],
[
"We selected 10 scenarios from different available scenario lists (e.g. Regneri:2010 , VanDerMeer2009, and the OMICS corpus BIBREF1 ), including scripts of different complexity (Taking a bath vs. Flying in an airplane) and specificity (Riding a public bus vs. Repairing a flat bicycle tire). For the full scenario list see Table 2 .",
"Texts were collected via the Amazon Mechanical Turk platform, which provides an opportunity to present an online task to humans (a.k.a. turkers). In order to gauge the effect of different M-Turk instructions on our task, we first conducted pilot experiments with different variants of instructions explaining the task. We finalized the instructions for the full data collection, asking the turkers to describe a scenario in form of a story as if explaining it to a child and to use a minimum of 150 words. The selected instruction variant resulted in comparably simple and explicit scenario-related stories. In the future we plan to collect more complex stories using different instructions. In total 190 turkers participated. All turkers were living in the USA and native speakers of English. We paid USD $0.50 per story to each turker. On average, the turkers took 9.37 minutes per story with a maximum duration of 17.38 minutes."
],
[
"Statistics for the corpus are given in Table 2 . On average, each story has a length of 12 sentences and 217 words with 98 word types on average. Stories are coherent and concentrate mainly on the corresponding scenario. Neglecting auxiliaries, modals and copulas, on average each story has 32 verbs, out of which 58% denote events related to the respective scenario. As can be seen in Table 2 , there is some variation in stories across scenarios: The flying in an airplane scenario, for example, is most complex in terms of the number of sentences, tokens and word types that are used. This is probably due to the inherent complexity of the scenario: Taking a flight, for example, is more complicated and takes more steps than taking a bath. The average count of sentences, tokens and types is also very high for the baking a cake scenario. Stories from the scenario often resemble cake recipes, which usually contain very detailed steps, so people tend to give more detailed descriptions in the stories.",
"For both flying in an airplane and baking a cake, the standard deviation is higher in comparison to other scenarios. This indicates that different turkers described the scenario with a varying degree of detail and can also be seen as an indicator for the complexity of both scenarios. In general, different people tend to describe situations subjectively, with a varying degree of detail. In contrast, texts from the taking a bath and planting a tree scenarios contain a relatively smaller number of sentences and fewer word types and tokens. Both planting a tree and taking a bath are simpler activities, which results in generally less complex texts.",
"The average pairwise word type overlap can be seen as a measure of lexical variety among stories: If it is high, the stories resemble each other more. We can see that stories in the flying in an airplane and baking a cake scenarios have the highest values here, indicating that most turkers used a similar vocabulary in their stories.",
"In general, the response quality was good. We had to discard 9% of the stories as these lacked the quality we were expecting. In total, we selected 910 stories for annotation."
],
[
"This section deals with the annotation of the data. We first describe the final annotation schema. Then, we describe the iterative process of corpus annotation and the refinement of the schema. This refinement was necessary due to the complexity of the annotation."
],
[
"For each of the scenarios, we designed a specific annotation template. A script template consists of scenario-specific event and participant labels. An example of a template is shown in Table 1 . All NP heads in the corpus were annotated with a participant label; all verbs were annotated with an event label. For both participants and events, we also offered the label unclear if the annotator could not assign another label. We additionally annotated coreference chains between NPs. Thus, the process resulted in three layers of annotation: event types, participant types and coreference annotation. These are described in detail below.",
"As a first layer, we annotated event types. There are two kinds of event type labels, scenario-specific event type labels and general labels. The general labels are used across every scenario and mark general features, for example whether an event belongs to the scenario at all. For the scenario-specific labels, we designed an unique template for every scenario, with a list of script-relevant event types that were used as labels. Such labels include for example ScrEv_close_drain in taking a bath as in Example UID10 (see Figure 1 for a complete list for the taking a bath scenario)",
"I start by closing $_{\\textsc {\\scriptsize ScrEv\\_close\\_drain}}$ the drain at the bottom of the tub.",
"The general labels that were used in addition to the script-specific labels in every scenario are listed below:",
"ScrEv_other. An event that belongs to the scenario, but its event type occurs too infrequently (for details, see below, Section \"Modification of the Schema\" ). We used the label “other\" because event classification would become too finegrained otherwise.",
"Example: After I am dried I put my new clothes on and clean up $_{\\textsc {\\scriptsize ScrEv\\_other}}$ the bathroom.",
"RelNScrEv. Related non-script event. An event that can plausibly happen during the execution of the script and is related to it, but that is not part of the script.",
"Example: After finding on what I wanted to wear, I went into the bathroom and shut $_{\\textsc {\\scriptsize RelNScrEv}}$ the door.",
"UnrelEv. An event that is unrelated to the script.",
"Example: I sank into the bubbles and took $_{\\textsc {\\scriptsize UnrelEv}}$ a deep breath.",
"Additionally, the annotators were asked to annotate verbs and phrases that evoke the script without explicitly referring to a script event with the label Evoking, as shown in Example UID10 . Today I took a bath $_{\\textsc {\\scriptsize Evoking}}$ in my new apartment.",
"As in the case of the event type labels, there are two kinds of participant labels: general labels and scenario-specific labels. The latter are part of the scenario-specific templates, e.g. ScrPart_drain in the taking a bath scenario, as can be seen in Example UID15 .",
"I start by closing the drain $_{\\textsc {\\scriptsize ScrPart\\_drain}}$ at the bottom of the tub.",
"The general labels that are used across all scenarios mark noun phrases with scenario-independent features. There are the following general labels:",
"ScrPart_other. A participant that belongs to the scenario, but its participant type occurs only infrequently.",
"Example: I find my bath mat $_{\\textsc {\\scriptsize ScrPart\\_other}}$ and lay it on the floor to keep the floor dry.",
"NPart. Non-participant. A referential NP that does not belong to the scenario.",
"Example: I washed myself carefully because I did not want to spill water onto the floor $_{\\textsc {\\scriptsize NPart}}$ .labeled",
"SuppVComp. A support verb complement. For further discussion of this label, see Section \"Special Cases\" ",
"Example: I sank into the bubbles and took a deep breath $_{\\textsc {\\scriptsize SuppVComp}}$ .",
"Head_of_Partitive. The head of a partitive or a partitive-like construction. For a further discussion of this label cf. Section \"Special Cases\" ",
"Example: I grabbed a bar $_{\\textsc {\\scriptsize Head\\_of\\_Partitive}}$ of soap and lathered my body.",
"No_label. A non-referential noun phrase that cannot be labeled with another label. Example: I sat for a moment $_{\\textsc {\\scriptsize No\\_label}}$ , relaxing, allowing the warm water to sooth my skin.",
"All NPs labeled with one of the labels SuppVComp, Head_of_Partitive or No_label are considered to be non-referential. No_label is used mainly in four cases in our data: non-referential time expressions (in a while, a million times better), idioms (no matter what), the non-referential “it” (it felt amazing, it is better) and other abstracta (a lot better, a little bit).",
"In the first annotation phase, annotators were asked to mark verbs and noun phrases that have an event or participant type, that is not listed in the template, as MissScrEv/ MissScrPart (missing script event or participant, resp.). These annotations were used as a basis for extending the templates (see Section \"Modification of the Schema\" ) and replaced later by newly introduced labels or ScrEv_other and ScrPart_other respectively.",
"All noun phrases were annotated with coreference information indicating which entities denote the same discourse referent. The annotation was done by linking heads of NPs (see Example UID21 , where the links are indicated by coindexing). As a rule, we assume that each element of a coreference chain is marked with the same participant type label.",
"I $ _{\\textsc {\\scriptsize Coref1}}$ washed my $ _{\\textsc {\\scriptsize Coref1}}$ entire body $ _{\\textsc {\\scriptsize Coref2}}$ , starting with my $ _{\\textsc {\\scriptsize Coref1}}$ face $ _{\\textsc {\\scriptsize Coref3}} $ and ending with the toes $ _{\\textsc {\\scriptsize Coref4}} $ . I $ _{\\textsc {\\scriptsize Coref1}}$ always wash my $ _{\\textsc {\\scriptsize Coref1}}$ toes $_{\\textsc {\\scriptsize Coref4}}$ very thoroughly ...",
"The assignment of an entity to a referent is not always trivial, as is shown in Example UID21 . There are some cases in which two discourse referents are grouped in a plural NP. In the example, those things refers to the group made up of shampoo, soap and sponge. In this case, we asked annotators to introduce a new coreference label, the name of which indicates which referents are grouped together (Coref_group_washing_tools). All NPs are then connected to the group phrase, resulting in an additional coreference chain.",
"I $ _{\\textsc {\\scriptsize Coref1}}$ made sure that I $ _{\\textsc {\\scriptsize Coref1}}$ have my $ _{\\textsc {\\scriptsize Coref1}}$ shampoo $ _{\\textsc {\\scriptsize Coref2 + Coref\\_group\\_washing\\_tools}}$ , soap $_{\\textsc {\\scriptsize Coref3 + Coref\\_group\\_washing\\_tools}}$ and sponge $ _{\\textsc {\\scriptsize Coref4 + Coref\\_group\\_washing\\_tools}}$ ready to get in. Once I $ _{\\textsc {\\scriptsize Coref1}}$ have those things $ _{\\textsc {\\scriptsize Coref\\_group\\_washing\\_tools}}$ I $ _{\\textsc {\\scriptsize Coref1}}$ sink into the bath. ... I $ _{\\textsc {\\scriptsize Coref1}}$ applied some soap $ _{\\textsc {\\scriptsize Coref1}}$0 on my $ _{\\textsc {\\scriptsize Coref1}}$1 body and used the sponge $ _{\\textsc {\\scriptsize Coref1}}$2 to scrub a bit. ... I $ _{\\textsc {\\scriptsize Coref1}}$3 rinsed the shampoo $ _{\\textsc {\\scriptsize Coref1}}$4 . Example UID21 thus contains the following coreference chains: Coref1: I $ _{\\textsc {\\scriptsize Coref1}}$5 I $ _{\\textsc {\\scriptsize Coref1}}$6 my $ _{\\textsc {\\scriptsize Coref1}}$7 I $ _{\\textsc {\\scriptsize Coref1}}$8 I $ _{\\textsc {\\scriptsize Coref1}}$9 I $ _{\\textsc {\\scriptsize Coref1}}$0 my $ _{\\textsc {\\scriptsize Coref1}}$1 I",
"Coref2: shampoo $\\rightarrow $ shampoo",
"Coref3: soap $\\rightarrow $ soap",
"Coref4: sponge $\\rightarrow $ sponge",
"Coref_group_washing_ tools: shampoo $\\rightarrow $ soap $\\rightarrow $ sponge $\\rightarrow $ things"
],
[
"The templates were carefully designed in an iterated process. For each scenario, one of the authors of this paper provided a preliminary version of the template based on the inspection of some of the stories. For a subset of the scenarios, preliminary templates developed at our department for a psycholinguistic experiment on script knowledge were used as a starting point. Subsequently, the authors manually annotated 5 randomly selected texts for each of the scenarios based on the preliminary template. Necessary extensions and changes in the templates were discussed and agreed upon. Most of the cases of disagreement were related to the granularity of the event and participant types. We agreed on the script-specific functional equivalence as a guiding principle. For example, reading a book, listening to music and having a conversation are subsumed under the same event label in the flight scenario, because they have the common function of in-flight entertainment in the scenario. In contrast, we assumed different labels for the cake tin and other utensils (bowls etc.), since they have different functions in the baking a cake scenario and accordingly occur with different script events.",
"Note that scripts and templates as such are not meant to describe an activity as exhaustively as possible and to mention all steps that are logically necessary. Instead, scripts describe cognitively prominent events in an activity. An example can be found in the flight scenario. While more than a third of the turkers mentioned the event of fastening the seat belts in the plane (buckle_seat_belt), no person wrote about undoing their seat belts again, although in reality both events appear equally often. Consequently, we added an event type label for buckling up, but no label for undoing the seat belts."
],
[
"We used the WebAnno annotation tool BIBREF2 for our project. The stories from each scenario were distributed among four different annotators. In a calibration phase, annotators were presented with some sample texts for test annotations; the results were discussed with the authors. Throughout the whole annotation phase, annotators could discuss any emerging issues with the authors. All annotations were done by undergraduate students of computational linguistics. The annotation was rather time-consuming due to the complexity of the task, and thus we decided for single annotation mode. To assess annotation quality, a small sample of texts was annotated by all four annotators and their inter-annotator agreement was measured (see Section \"Inter-Annotator Agreement\" ). It was found to be sufficiently high.",
"Annotation of the corpus together with some pre- and post-processing of the data required about 500 hours of work. All stories were annotated with event and participant types (a total of 12,188 and 43,946 instances, respectively). On average there were 7 coreference chains per story with an average length of 6 tokens."
],
[
"After the first annotation round, we extended and changed the templates based on the results. As mentioned before, we used MissScrEv and MissScrPart labels to mark verbs and noun phrases instantiating events and participants for which no appropriate labels were available in the templates. Based on the instances with these labels (a total of 941 and 1717 instances, respectively), we extended the guidelines to cover the sufficiently frequent cases. In order to include new labels for event and participant types, we tried to estimate the number of instances that would fall under a certain label. We added new labels according to the following conditions:",
"For the participant annotations, we added new labels for types that we expected to appear at least 10 times in total in at least 5 different stories (i.e. in approximately 5% of the stories).",
"For the event annotations, we chose those new labels for event types that would appear in at least 5 different stories.",
"In order to avoid too fine a granularity of the templates, all other instances of MissScrEv and MissScrPart were re-labeled with ScrEv_other and ScrPart_other. We also relabeled participants and events from the first annotation phase with ScrEv_other and ScrPart_other, if they did not meet the frequency requirements. The event label air_bathroom (the event of letting fresh air into the room after the bath), for example, was only used once in the stories, so we relabeled that instance to ScrEv_other.",
"Additionally, we looked at the DeScript corpus BIBREF3 , which contains manually clustered event paraphrase sets for the 10 scenarios that are also covered by InScript (see Section \"Comparison to the DeScript Corpus\" ). Every such set contains event descriptions that describe a certain event type. We extended our templates with additional labels for these events, if they were not yet part of the template."
],
[
"Noun-noun compounds were annotated twice with the same label (whole span plus the head noun), as indicated by Example UID31 . This redundant double annotation is motivated by potential processing requirements.",
"I get my (wash (cloth $ _{\\textsc {\\scriptsize ScrPart\\_washing\\_tools}} ))$ , $_{\\textsc {\\scriptsize ScrPart\\_washing\\_tools}} $ and put it under the water.",
"A special treatment was given to support verb constructions such as take time, get home or take a seat in Example UID32 . The semantics of the verb itself is highly underspecified in such constructions; the event type is largely dependent on the object NP. As shown in Example UID32 , we annotate the head verb with the event type described by the whole construction and label its object with SuppVComp (support verb complement), indicating that it does not have a proper reference.",
"I step into the tub and take $ _{\\textsc {\\scriptsize ScrEv\\_sink\\_water}} $ a seat $ _{\\textsc {\\scriptsize SuppVComp}} $ .",
"We used the Head_of_Partitive label for the heads in partitive constructions, assuming that the only referential part of the construction is the complement. This is not completely correct, since different partitive heads vary in their degree of concreteness (cf. Examples UID33 and UID33 ), but we did not see a way to make the distinction sufficiently transparent to the annotators. Our seats were at the back $ _{\\textsc {\\scriptsize Head\\_of\\_Partitive}} $ of the train $ _{\\textsc {\\scriptsize ScrPart\\_train}} $ . In the library you can always find a couple $ _{\\textsc {\\scriptsize Head\\_of\\_Partitive}} $ of interesting books $ _{\\textsc {\\scriptsize ScrPart\\_book}} $ .",
"Group denoting NPs sometimes refer to groups whose members are instances of different participant types. In Example UID34 , the first-person plural pronoun refers to the group consisting of the passenger (I) and a non-participant (my friend). To avoid a proliferation of event type labels, we labeled these cases with Unclear.",
"I $ _{\\textsc {\\scriptsize {ScrPart\\_passenger}}}$ wanted to visit my $_{\\textsc {\\scriptsize {ScrPart\\_passenger}}}$ friend $ _{\\textsc {\\scriptsize {NPart}}}$ in New York. ... We $_{\\textsc {\\scriptsize Unclear}}$ met at the train station.",
"We made an exception for the Getting a Haircut scenario, where the mixed participant group consisting of the hairdresser and the customer occurs very often, as in Example UID34 . Here, we introduced the additional ad-hoc participant label Scr_Part_hairdresser_customer.",
"While Susan $_{\\textsc {\\scriptsize {ScrPart\\_hairdresser}}}$ is cutting my $_{\\textsc {\\scriptsize {ScrPart\\_customer}}}$ hair we $_{\\textsc {\\scriptsize Scr\\_Part\\_hairdresser\\_customer}}$ usually talk a bit."
],
[
"In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement."
],
[
"Figure 5 gives an overview of the number of event and participant types provided in the templates. Taking a flight and getting a haircut stand out with a large number of both event and participant types, which is due to the inherent complexity of the scenarios. In contrast, planting a tree and going on a train contain the fewest labels. There are 19 event and participant types on average.",
"Figure 6 presents overview statistics about the usage of event labels, participant labels and coreference chain annotations. As can be seen, there are usually many more mentions of participants than events. For coreference chains, there are some chains that are really long (which also results in a large scenario-wise standard deviation). Usually, these chains describe the protagonist.",
"We also found again that the flying in an airplane scenario stands out in terms of participant mentions, event mentions and average number of coreference chains.",
"Figure 7 shows for every participant label in the baking a cake scenario the number of stories which they occurred in. This indicates how relevant a participant is for the script. As can be seen, a small number of participants are highly prominent: cook, ingredients and cake are mentioned in every story. The fact that the protagonist appears most often consistently holds for all other scenarios, where the acting person appears in every story, and is mentioned most frequently.",
"Figure 8 shows the distribution of participant/event type labels over all appearances over all scenarios on average. The groups stand for the most frequently appearing label, the top 2 to 5 labels in terms of frequency and the top 6 to 10. ScrEv_other and ScrPart_other are shown separately. As can be seen, the most frequently used participant label (the protagonist) makes up about 40% of overall participant instances. The four labels that follow the protagonist in terms of frequency together appear in 37% of the cases. More than 2 out of 3 participants in total belong to one of only 5 labels.",
"In contrast, the distribution for events is more balanced. 14% of all event instances have the most prominent event type. ScrEv_other and ScrPart_other both appear as labels in at most 5% of all event and participant instantiations: The specific event and participant type labels in our templates cover by far most of the instances.",
"In Figure 9 , we grouped participants similarly into the first, the top 2-5 and top 6-10 most frequently appearing participant types. The figure shows for each of these groups the average frequency per story, and in the rightmost column the overall average. The results correspond to the findings from the last paragraph."
],
[
"As mentioned previously, the InScript corpus is part of a larger research project, in which also a corpus of a different kind, the DeScript corpus, was created. DeScript covers 40 scenarios, and also contains the 10 scenarios from InScript. This corpus contains texts that describe scripts on an abstract and generic level, while InScript contains instantiations of scripts in narrative texts. Script events in DeScript are described in a very simple, telegram-style language (see Figure 2 ). Since one of the long-term goals of the project is to align the InScript texts with the script structure given from DeScript, it is interesting to compare both resources.",
"The InScript corpus exhibits much more lexical variation than DeScript. Many approaches use the type-token ratio to measure this variance. However, this measure is known to be sensitive to text length (see e.g. Tweedie1998), which would result in very small values for InScript and relatively large ones for DeScript, given the large average difference of text lengths between the corpora. Instead, we decided to use the Measure of Textual Lexical Diversity (MTLD) (McCarthy2010, McCarthy2005), which is familiar in corpus linguistics. This metric measures the average number of tokens in a text that are needed to retain a type-token ratio above a certain threshold. If the MTLD for a text is high, many tokens are needed to lower the type-token ratio under the threshold, so the text is lexically diverse. In contrast, a low MTLD indicates that only a few words are needed to make the type-token ratio drop, so the lexical diversity is smaller. We use the threshold of 0.71, which is proposed by the authors as a well-proven value.",
"Figure 10 compares the lexical diversity of both resources. As can be seen, the InScript corpus with its narrative texts is generally much more diverse than the DeScript corpus with its short event descriptions, across all scenarios. For both resources, the flying in an airplane scenario is most diverse (as was also indicated above by the mean word type overlap). However, the difference in the variation of lexical variance of scenarios is larger for DeScript than for InScript. Thus, the properties of a scenario apparently influence the lexical variance of the event descriptions more than the variance of the narrative texts. We used entropy BIBREF6 over lemmas to measure the variance of lexical realizations for events. We excluded events for which there were less than 10 occurrences in DeScript or InScript. Since there is only an event annotation for 50 ESDs per scenario in DeScript, we randomly sampled 50 texts from InScript for computing the entropy to make the numbers more comparable.",
"Figure 11 shows as an example the entropy values for the event types in the going on a train scenario. As can be seen in the graph, the entropy for InScript is in general higher than for DeScript. In the stories, a wider variety of verbs is used to describe events. There are also large differences between events: While wait has a really low entropy, spend_time_train has an extremely high entropy value. This event type covers many different activities such as reading, sleeping etc."
],
[
"In this paper we described the InScript corpus of 1,000 narrative texts annotated with script structure and coreference information. We described the annotation process, various difficulties encountered during annotation and different remedies that were taken to overcome these. One of the future research goals of our project is also concerned with finding automatic methods for text-to-script mapping, i.e. for the alignment of text segments with script states. We consider InScript and DeScript together as a resource for studying this alignment. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing."
],
[
"This research was funded by the German Research Foundation (DFG) as part of SFB 1102 'Information Density and Linguistic Encoding'."
]
]
} | {
"question": [
"What are the key points in the role of script knowledge that can be studied?",
"Did the annotators agreed and how much?",
"How many subjects have been used to create the annotations?"
],
"question_id": [
"352c081c93800df9654315e13a880d6387b91919",
"18fbf9c08075e3b696237d22473c463237d153f5",
"a37ef83ab6bcc6faff3c70a481f26174ccd40489"
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"697e318cbd3c0685caf6f8670044f74eeca2dd29"
],
"worker_id": [
"06fa905d7f2aaced6dc72e9511c71a2a51e8aead"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "For event types and participant types, there was a moderate to substantial level of agreement using the Fleiss' Kappa. For coreference chain annotation, there was average agreement of 90.5%.",
"evidence": [
"FLOAT SELECTED: Figure 4: Inter-annotator agreement statistics.",
"In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement."
],
"highlighted_evidence": [
"FLOAT SELECTED: Figure 4: Inter-annotator agreement statistics.",
" The results are shown in Figure 4 and indicate moderate to substantial agreement",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Moderate agreement of 0.64-0.68 Fleiss’ Kappa over event type labels, 0.77 Fleiss’ Kappa over participant labels, and good agreement of 90.5% over coreference information.",
"evidence": [
"In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 . Interestingly, if we calculated the Kappa only on the subset of cases that were annotated with script-specific event and participant labels by all annotators, results were better than those of the evaluation on all labeled instances (including also unrelated and related non-script events). This indicates one of the challenges of the annotation task: In many cases it is difficult to decide whether a particular event should be considered a central script event, or an event loosely related or unrelated to the script.",
"For coreference chain annotation, we calculated the percentage of pairs which were annotated by at least 3 annotators (qualified majority vote) compared to the set of those pairs annotated by at least one person (see Figure 4 ). We take the result of 90.5% between annotators to be a good agreement.",
"FLOAT SELECTED: Figure 4: Inter-annotator agreement statistics."
],
"highlighted_evidence": [
"The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 .",
"We take the result of 90.5% between annotators to be a good agreement.",
"FLOAT SELECTED: Figure 4: Inter-annotator agreement statistics."
]
}
],
"annotation_id": [
"fccbfbfd1cb203422c01866dd2ef25ff342de6d1",
"31f0262a036f427ffe0c75ba54ab33d723ed818d"
],
"worker_id": [
"06fa905d7f2aaced6dc72e9511c71a2a51e8aead",
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" four different annotators"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We used the WebAnno annotation tool BIBREF2 for our project. The stories from each scenario were distributed among four different annotators. In a calibration phase, annotators were presented with some sample texts for test annotations; the results were discussed with the authors. Throughout the whole annotation phase, annotators could discuss any emerging issues with the authors. All annotations were done by undergraduate students of computational linguistics. The annotation was rather time-consuming due to the complexity of the task, and thus we decided for single annotation mode. To assess annotation quality, a small sample of texts was annotated by all four annotators and their inter-annotator agreement was measured (see Section \"Inter-Annotator Agreement\" ). It was found to be sufficiently high."
],
"highlighted_evidence": [
"The stories from each scenario were distributed among four different annotators. "
]
}
],
"annotation_id": [
"f9ae2e4623e644564b6b0851573a5cd257eb2208"
],
"worker_id": [
"06fa905d7f2aaced6dc72e9511c71a2a51e8aead"
]
}
]
} | {
"caption": [
"Figure 1: An excerpt from a story on the TAKING A BATH script.",
"Figure 2: Connecting DeScript and InScript: an example from the BAKING A CAKE scenario (InScript participant annotation is omitted for better readability).",
"Table 1: Bath scenario template (labels added in the second phase of annotation are marked in bold).",
"Table 2: Corpus statistics for different scenarios (standard deviation given in parentheses). The maximum per column is highlighted in boldface, the minimum in boldface italics.",
"Figure 3: Sample event and participant annotation for the TAKING A BATH script.",
"Figure 4: Inter-annotator agreement statistics.",
"Figure 5: The number of participants and events in the templates.",
"Figure 6: Annotation statistics over all scenarios.",
"Figure 8: Distribution of participants (left) and events (right) for the 1, the top 2-5, top 6-10 most frequently appearing events/participants, SCREV/SCRPART OTHER and the rest.",
"Figure 9: Average number of participant mentions for a story, for the first, the top 2-5, top 6-10 most frequently appearing events/participants, and the overall average.",
"Figure 7: The number of stories in the BAKING A CAKE scenario that contain a certain participant label.",
"Figure 10: MTLD values for DeScript and InScript, per scenario.",
"Figure 11: Entropy over verb lemmas for events (left y-axis, H(x)) in the GOING ON A TRAIN SCENARIO. Bars in the background indicate the absolute number of occurrence of instances (right y-axis, N(x))."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"2-Table1-1.png",
"3-Table2-1.png",
"4-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png",
"6-Figure6-1.png",
"7-Figure8-1.png",
"7-Figure9-1.png",
"7-Figure7-1.png",
"8-Figure10-1.png",
"8-Figure11-1.png"
]
} |
1905.00563 | Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications | Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on improving accuracy and overlook other aspects such as robustness and interpretability. In this paper, we propose adversarial modifications for link prediction models: identifying the fact to add into or remove from the knowledge graph that changes the prediction for a target fact after the model is retrained. Using these single modifications of the graph, we identify the most influential fact for a predicted link and evaluate the sensitivity of the model to the addition of fake facts. We introduce an efficient approach to estimate the effect of such modifications by approximating the change in the embeddings when the knowledge graph changes. To avoid the combinatorial search over all possible facts, we train a network to decode embeddings to their corresponding graph components, allowing the use of gradient-based optimization to identify the adversarial modification. We use these techniques to evaluate the robustness of link prediction models (by measuring sensitivity to additional facts), study interpretability through the facts most responsible for predictions (by identifying the most influential neighbors), and detect incorrect facts in the knowledge base. | {
"section_name": [
"Introduction",
"Background and Notation",
"Completion Robustness and Interpretability via Adversarial Graph Edits ()",
"Removing a fact ()",
"Adding a new fact ()",
"Challenges",
"Efficiently Identifying the Modification",
"First-order Approximation of Influence",
"Continuous Optimization for Search",
"Experiments",
"Influence Function vs ",
"Robustness of Link Prediction Models",
"Interpretability of Models",
"Finding Errors in Knowledge Graphs",
"Related Work",
"Conclusions",
"Acknowledgements",
"Appendix",
"Modifications of the Form 〈s,r ' ,o ' 〉\\langle s, r^{\\prime }, o^{\\prime } \\rangle ",
"Modifications of the Form 〈s,r ' ,o〉\\langle s, r^{\\prime }, o \\rangle ",
"First-order Approximation of the Change For TransE",
"Sample Adversarial Attacks"
],
"paragraphs": [
[
"Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering. Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts. These dense representation models for link prediction include tensor factorization BIBREF0 , BIBREF1 , BIBREF2 , algebraic operations BIBREF3 , BIBREF4 , BIBREF5 , multiple embeddings BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , and complex neural models BIBREF10 , BIBREF11 . However, there are only a few studies BIBREF12 , BIBREF13 that investigate the quality of the different KG models. There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions. In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned, which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits (). First, we consider perturbations that red!50!blackremove a neighboring link for the target fact, thus identifying the most influential related fact, providing an explanation for the model's prediction. As an example, consider the excerpt from a KG in Figure 1 with two observed facts, and a target predicted fact that Princes Henriette is the parent of Violante Bavaria. Our proposed graph perturbation, shown in Figure 1 , identifies the existing fact that Ferdinal Maria is the father of Violante Bavaria as the one when removed and model retrained, will change the prediction of Princes Henriette's child. We also study attacks that green!50!blackadd a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph. An example attack for the original graph in Figure 1 , is depicted in Figure 1 . Such perturbations to the the training data are from a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 .",
"Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\\%$ and $50.7\\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\\%$ accuracy in detecting errors."
],
[
"In this section, we briefly introduce some notations, and existing relational embedding approaches that model knowledge graph completion using dense vectors. In KGs, facts are represented using triples of subject, relation, and object, $\\langle s, r, o\\rangle $ , where $s,o\\in \\xi $ , the set of entities, and $r\\in $ , the set of relations. To model the KG, a scoring function $\\psi :\\xi \\times \\times \\xi \\rightarrow $ is learned to evaluate whether any given fact is true. In this work, we focus on multiplicative models of link prediction, specifically DistMult BIBREF2 because of its simplicity and popularity, and ConvE BIBREF10 because of its high accuracy. We can represent the scoring function of such methods as $\\psi (s,r,o) = , ) \\cdot $ , where $,,\\in ^d$ are embeddings of the subject, relation, and object respectively. In DistMult, $, ) = \\odot $ , where $\\odot $ is element-wise multiplication operator. Similarly, in ConvE, $, )$ is computed by a convolution on the concatenation of $$ and $s,o\\in \\xi $0 .",
"We use the same setup as BIBREF10 for training, i.e., incorporate binary cross-entropy loss over the triple scores. In particular, for subject-relation pairs $(s,r)$ in the training data $G$ , we use binary $y^{s,r}_o$ to represent negative and positive facts. Using the model's probability of truth as $\\sigma (\\psi (s,r,o))$ for $\\langle s,r,o\\rangle $ , the loss is defined as: (G) = (s,r)o ys,ro(((s,r,o)))",
"+ (1-ys,ro)(1 - ((s,r,o))). Gradient descent is used to learn the embeddings $,,$ , and the parameters of $, if any.\n$ "
],
[
"For adversarial modifications on KGs, we first define the space of possible modifications. For a target triple $\\langle s, r, o\\rangle $ , we constrain the possible triples that we can remove (or inject) to be in the form of $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ i.e $s^{\\prime }$ and $r^{\\prime }$ may be different from the target, but the object is not. We analyze other forms of modifications such as $\\langle s, r^{\\prime }, o^{\\prime }\\rangle $ and $\\langle s, r^{\\prime }, o\\rangle $ in appendices \"Modifications of the Form 〈s,r ' ,o ' 〉\\langle s, r^{\\prime }, o^{\\prime } \\rangle \" and \"Modifications of the Form 〈s,r ' ,o〉\\langle s, r^{\\prime }, o \\rangle \" , and leave empirical evaluation of these modifications for future work."
],
[
"For explaining a target prediction, we are interested in identifying the observed fact that has the most influence (according to the model) on the prediction. We define influence of an observed fact on the prediction as the change in the prediction score if the observed fact was not present when the embeddings were learned. Previous work have used this concept of influence similarly for several different tasks BIBREF19 , BIBREF20 . Formally, for the target triple ${s,r,o}$ and observed graph $G$ , we want to identify a neighboring triple ${s^{\\prime },r^{\\prime },o}\\in G$ such that the score $\\psi (s,r,o)$ when trained on $G$ and the score $\\overline{\\psi }(s,r,o)$ when trained on $G-\\lbrace {s^{\\prime },r^{\\prime },o}\\rbrace $ are maximally different, i.e. *argmax(s', r')Nei(o) (s',r')(s,r,o) where $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)=\\psi (s, r, o)-\\overline{\\psi }(s,r,o)$ , and $\\text{Nei}(o)=\\lbrace (s^{\\prime },r^{\\prime })|\\langle s^{\\prime },r^{\\prime },o \\rangle \\in G \\rbrace $ ."
],
[
"We are also interested in investigating the robustness of models, i.e., how sensitive are the predictions to small additions to the knowledge graph. Specifically, for a target prediction ${s,r,o}$ , we are interested in identifying a single fake fact ${s^{\\prime },r^{\\prime },o}$ that, when added to the knowledge graph $G$ , changes the prediction score $\\psi (s,r,o)$ the most. Using $\\overline{\\psi }(s,r,o)$ as the score after training on $G\\cup \\lbrace {s^{\\prime },r^{\\prime },o}\\rbrace $ , we define the adversary as: *argmax(s', r') (s',r')(s,r,o) where $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)=\\psi (s, r, o)-\\overline{\\psi }(s,r,o)$ . The search here is over any possible $s^{\\prime }\\in \\xi $ , which is often in the millions for most real-world KGs, and $r^{\\prime }\\in $ . We also identify adversaries that increase the prediction score for specific false triple, i.e., for a target fake fact ${s,r,o}$ , the adversary is ${s^{\\prime },r^{\\prime },o}$0 , where ${s^{\\prime },r^{\\prime },o}$1 is defined as before."
],
[
"There are a number of crucial challenges when conducting such adversarial attack on KGs. First, evaluating the effect of changing the KG on the score of the target fact ( $\\overline{\\psi }(s,r,o)$ ) is expensive since we need to update the embeddings by retraining the model on the new graph; a very time-consuming process that is at least linear in the size of $G$ . Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive. Specifically, the search size for unobserved facts is $|\\xi | \\times ||$ , which, for example in YAGO3-10 KG, can be as many as $4.5 M$ possible facts for a single target prediction."
],
[
"In this section, we propose algorithms to address mentioned challenges by (1) approximating the effect of changing the graph on a target prediction, and (2) using continuous optimization for the discrete search over potential modifications."
],
[
"We first study the addition of a fact to the graph, and then extend it to cover removal as well. To capture the effect of an adversarial modification on the score of a target triple, we need to study the effect of the change on the vector representations of the target triple. We use $$ , $$ , and $$ to denote the embeddings of $s,r,o$ at the solution of $\\operatornamewithlimits{argmin} (G)$ , and when considering the adversarial triple $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ , we use $$ , $$ , and $$ for the new embeddings of $s,r,o$ , respectively. Thus $$0 is a solution to $$1 , which can also be written as $$2 . Similarly, $$3 s', r', o $$4 $$5 $$6 $$7 o $$8 $$9 $$0 $$1 $$2 $$3 O(n3) $$4 $$5 $$6 (s,r,o)-(s, r, o) $$7 - $$8 s, r = ,) $$9 - $s,r,o$0 (G)= (G)+(s', r', o ) $s,r,o$1 $s,r,o$2 s', r' = ',') $s,r,o$3 = ((s',r',o)) $s,r,o$4 eo (G)=0 $s,r,o$5 eo (G) $s,r,o$6 Ho $s,r,o$7 dd $s,r,o$8 o $s,r,o$9 $\\operatornamewithlimits{argmin} (G)$0 - $\\operatornamewithlimits{argmin} (G)$1 -= $\\operatornamewithlimits{argmin} (G)$2 Ho $\\operatornamewithlimits{argmin} (G)$3 Ho + (1-) s',r's',r' $\\operatornamewithlimits{argmin} (G)$4 Ho $\\operatornamewithlimits{argmin} (G)$5 dd $\\operatornamewithlimits{argmin} (G)$6 d $\\operatornamewithlimits{argmin} (G)$7 s,r,s',r'd $\\operatornamewithlimits{argmin} (G)$8 s, r, o $\\operatornamewithlimits{argmin} (G)$9 s', r', o $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $0 $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $1 $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $2 "
],
[
"Using the approximations provided in the previous section, Eq. () and (), we can use brute force enumeration to find the adversary $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ . This approach is feasible when removing an observed triple since the search space of such modifications is usually small; it is the number of observed facts that share the object with the target. On the other hand, finding the most influential unobserved fact to add requires search over a much larger space of all possible unobserved facts (that share the object). Instead, we identify the most influential unobserved fact $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ by using a gradient-based algorithm on vector $_{s^{\\prime },r^{\\prime }}$ in the embedding space (reminder, $_{s^{\\prime },r^{\\prime }}=^{\\prime },^{\\prime })$ ), solving the following continuous optimization problem in $^d$ : *argmaxs', r' (s',r')(s,r,o). After identifying the optimal $_{s^{\\prime }, r^{\\prime }}$ , we still need to generate the pair $(s^{\\prime },r^{\\prime })$ . We design a network, shown in Figure 2 , that maps the vector $_{s^{\\prime },r^{\\prime }}$ to the entity-relation space, i.e., translating it into $(s^{\\prime },r^{\\prime })$ . In particular, we train an auto-encoder where the encoder is fixed to receive the $s$ and $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $0 as one-hot inputs, and calculates $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $1 in the same way as the DistMult and ConvE encoders respectively (using trained embeddings). The decoder is trained to take $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $2 as input and produce $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $3 and $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $4 , essentially inverting $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $5 s, r $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $6 s $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $7 r $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $8 s, r $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $9 We evaluate the performance of our inverter networks (one for each model/dataset) on correctly recovering the pairs of subject and relation from the test set of our benchmarks, given the $_{s^{\\prime },r^{\\prime }}$0 . The accuracy of recovered pairs (and of each argument) is given in Table 1 . As shown, our networks achieve a very high accuracy, demonstrating their ability to invert vectors $_{s^{\\prime },r^{\\prime }}$1 to $_{s^{\\prime },r^{\\prime }}$2 pairs."
],
[
"We evaluate by ( \"Influence Function vs \" ) comparing estimate with the actual effect of the attacks, ( \"Robustness of Link Prediction Models\" ) studying the effect of adversarial attacks on evaluation metrics, ( \"Interpretability of Models\" ) exploring its application to the interpretability of KG representations, and ( \"Finding Errors in Knowledge Graphs\" ) detecting incorrect triples."
],
[
"To evaluate the quality of our approximations and compare with influence function (IF), we conduct leave one out experiments. In this setup, we take all the neighbors of a random target triple as candidate modifications, remove them one at a time, retrain the model each time, and compute the exact change in the score of the target triple. We can use the magnitude of this change in score to rank the candidate triples, and compare this exact ranking with ranking as predicted by: , influence function with and without Hessian matrix, and the original model score (with the intuition that facts that the model is most confident of will have the largest impact when removed). Similarly, we evaluate by considering 200 random triples that share the object entity with the target sample as candidates, and rank them as above. The average results of Spearman's $\\rho $ and Kendall's $\\tau $ rank correlation coefficients over 10 random target samples is provided in Table 3 . performs comparably to the influence function, confirming that our approximation is accurate. Influence function is slightly more accurate because they use the complete Hessian matrix over all the parameters, while we only approximate the change by calculating the Hessian over $$ . The effect of this difference on scalability is dramatic, constraining IF to very small graphs and small embedding dimensionality ( $d\\le 10$ ) before we run out of memory. In Figure 3 , we show the time to compute a single adversary by IF compared to , as we steadily grow the number of entities (randomly chosen subgraphs), averaged over 10 random triples. As it shows, is mostly unaffected by the number of entities while IF increases quadratically. Considering that real-world KGs have tens of thousands of times more entities, making IF unfeasible for them."
],
[
"Now we evaluate the effectiveness of to successfully attack link prediction by adding false facts. The goal here is to identify the attacks for triples in the test data, and measuring their effect on MRR and Hits@ metrics (ranking evaluations) after conducting the attack and retraining the model.",
"Since this is the first work on adversarial attacks for link prediction, we introduce several baselines to compare against our method. For finding the adversarial fact to add for the target triple $\\langle s, r, o \\rangle $ , we consider two baselines: 1) choosing a random fake fact $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ (Random Attack); 2) finding $(s^{\\prime }, r^{\\prime })$ by first calculating $, )$ and then feeding $-, )$ to the decoder of the inverter function (Opposite Attack). In addition to , we introduce two other alternatives of our method: (1) , that uses to increase the score of fake fact over a test triple, i.e., we find the fake fact the model ranks second after the test triple, and identify the adversary for them, and (2) that selects between and attacks based on which has a higher estimated change in score.",
"All-Test The result of the attack on all test facts as targets is provided in the Table 4 . outperforms the baselines, demonstrating its ability to effectively attack the KG representations. It seems DistMult is more robust against random attacks, while ConvE is more robust against designed attacks. is more effective than since changing the score of a fake fact is easier than of actual facts; there is no existing evidence to support fake facts. We also see that YAGO3-10 models are more robust than those for WN18. Looking at sample attacks (provided in Appendix \"Sample Adversarial Attacks\" ), mostly tries to change the type of the target object by associating it with a subject and a relation for a different entity type.",
"Uncertain-Test To better understand the effect of attacks, we consider a subset of test triples that 1) the model predicts correctly, 2) difference between their scores and the negative sample with the highest score is minimum. This “Uncertain-Test” subset contains 100 triples from each of the original test sets, and we provide results of attacks on this data in Table 4 . The attacks are much more effective in this scenario, causing a considerable drop in the metrics. Further, in addition to significantly outperforming other baselines, they indicate that ConvE's confidence is much more robust.",
"Relation Breakdown We perform additional analysis on the YAGO3-10 dataset to gain a deeper understanding of the performance of our model. As shown in Figure 4 , both DistMult and ConvE provide a more robust representation for isAffiliatedTo and isConnectedTo relations, demonstrating the confidence of models in identifying them. Moreover, the affects DistMult more in playsFor and isMarriedTo relations while affecting ConvE more in isConnectedTo relations.",
"Examples Sample adversarial attacks are provided in Table 5 . attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity types."
],
[
"To be able to understand and interpret why a link is predicted using the opaque, dense embeddings, we need to find out which part of the graph was most influential on the prediction. To provide such explanations for each predictions, we identify the most influential fact using . Instead of focusing on individual predictions, we aggregate the explanations over the whole dataset for each relation using a simple rule extraction technique: we find simple patterns on subgraphs that surround the target triple and the removed fact from , and appear more than $90\\%$ of the time. We only focus on extracting length-2 horn rules, i.e., $R_1(a,c)\\wedge R_2(c,b)\\Rightarrow R(a,b)$ , where $R(a,b)$ is the target and $R_2(c,b)$ is the removed fact. Table 6 shows extracted YAGO3-10 rules that are common to both models, and ones that are not. The rules show several interesting inferences, such that hasChild is often inferred via married parents, and isLocatedIn via transitivity. There are several differences in how the models reason as well; DistMult often uses the hasCapital as an intermediate step for isLocatedIn, while ConvE incorrectly uses isNeighbor. We also compare against rules extracted by BIBREF2 for YAGO3-10 that utilizes the structure of DistMult: they require domain knowledge on types and cannot be applied to ConvE. Interestingly, the extracted rules contain all the rules provided by , demonstrating that can be used to accurately interpret models, including ones that are not interpretable, such as ConvE. These are preliminary steps toward interpretability of link prediction models, and we leave more analysis of interpretability to future work."
],
[
"Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ in the neighborhood of the train triple $\\langle s, r, o\\rangle $ , we need to find the triple $\\langle s^{\\prime },r^{\\prime },o\\rangle $ that results in the least change $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)$ when removed from the graph.",
"To evaluate this application, we inject random triples into the graph, and measure the ability of to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\\langle s^{\\prime }, r, o\\rangle $ where $s^{\\prime }$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ where $s^{\\prime }$ and $r^{\\prime }$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in the Table 7 . When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that outperforms both of these with a considerable gap, obtaining an accuracy of $42\\%$ and $55\\%$ in detecting errors."
],
[
"Learning relational knowledge representations has been a focus of active research in the past few years, but to the best of our knowledge, this is the first work on conducting adversarial modifications on the link prediction task. Knowledge graph embedding There is a rich literature on representing knowledge graphs in vector spaces that differ in their scoring functions BIBREF21 , BIBREF22 , BIBREF23 . Although is primarily applicable to multiplicative scoring functions BIBREF0 , BIBREF1 , BIBREF2 , BIBREF24 , these ideas apply to additive scoring functions BIBREF18 , BIBREF6 , BIBREF7 , BIBREF25 as well, as we show in Appendix \"First-order Approximation of the Change For TransE\" .",
"Furthermore, there is a growing body of literature that incorporates an extra types of evidence for more informed embeddings such as numerical values BIBREF26 , images BIBREF27 , text BIBREF28 , BIBREF29 , BIBREF30 , and their combinations BIBREF31 . Using , we can gain a deeper understanding of these methods, especially those that build their embeddings wit hmultiplicative scoring functions.",
"Interpretability and Adversarial Modification There has been a significant recent interest in conducting an adversarial attacks on different machine learning models BIBREF16 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 to attain the interpretability, and further, evaluate the robustness of those models. BIBREF20 uses influence function to provide an approach to understanding black-box models by studying the changes in the loss occurring as a result of changes in the training data. In addition to incorporating their established method on KGs, we derive a novel approach that differs from their procedure in two ways: (1) instead of changes in the loss, we consider the changes in the scoring function, which is more appropriate for KG representations, and (2) in addition to searching for an attack, we introduce a gradient-based method that is much faster, especially for “adding an attack triple” (the size of search space make the influence function method infeasible). Previous work has also considered adversaries for KGs, but as part of training to improve their representation of the graph BIBREF37 , BIBREF38 . Adversarial Attack on KG Although this is the first work on adversarial attacks for link prediction, there are two approaches BIBREF39 , BIBREF17 that consider the task of adversarial attack on graphs. There are a few fundamental differences from our work: (1) they build their method on top of a path-based representations while we focus on embeddings, (2) they consider node classification as the target of their attacks while we attack link prediction, and (3) they conduct the attack on small graphs due to restricted scalability, while the complexity of our method does not depend on the size of the graph, but only the neighborhood, allowing us to attack real-world graphs."
],
[
"Motivated by the need to analyze the robustness and interpretability of link prediction models, we present a novel approach for conducting adversarial modifications to knowledge graphs. We introduce , completion robustness and interpretability via adversarial graph edits: identifying the fact to add into or remove from the KG that changes the prediction for a target fact. uses (1) an estimate of the score change for any target triple after adding or removing another fact, and (2) a gradient-based algorithm for identifying the most influential modification. We show that can effectively reduce ranking metrics on link prediction models upon applying the attack triples. Further, we incorporate the to study the interpretability of KG representations by summarizing the most influential facts for each relation. Finally, using , we introduce a novel automated error detection method for knowledge graphs. We have release the open-source implementation of our models at: https://pouyapez.github.io/criage."
],
[
"We would like to thank Matt Gardner, Marco Tulio Ribeiro, Zhengli Zhao, Robert L. Logan IV, Dheeru Dua and the anonymous reviewers for their detailed feedback and suggestions. This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF awards #IIS-1817183 and #IIS-1756023. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies."
],
[
"We approximate the change on the score of the target triple upon applying attacks other than the $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ ones. Since each relation appears many times in the training triples, we can assume that applying a single attack will not considerably affect the relations embeddings. As a result, we just need to study the attacks in the form of $\\langle s, r^{\\prime }, o \\rangle $ and $\\langle s, r^{\\prime }, o^{\\prime } \\rangle $ . Defining the scoring function as $\\psi (s,r,o) = , ) \\cdot = _{s,r} \\cdot $ , we further assume that $\\psi (s,r,o) =\\cdot (, ) =\\cdot _{r,o}$ ."
],
[
"Using similar argument as the attacks in the form of $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ , we can calculate the effect of the attack, $\\overline{\\psi }{(s,r,o)}-\\psi (s, r, o)$ as: (s,r,o)-(s, r, o)=(-) s, r where $_{s, r} = (,)$ .",
"We now derive an efficient computation for $(-)$ . First, the derivative of the loss $(\\overline{G})= (G)+(\\langle s, r^{\\prime }, o^{\\prime } \\rangle )$ over $$ is: es (G) = es (G) - (1-) r', o' where $_{r^{\\prime }, o^{\\prime }} = (^{\\prime },^{\\prime })$ , and $\\varphi = \\sigma (\\psi (s,r^{\\prime },o^{\\prime }))$ . At convergence, after retraining, we expect $\\nabla _{e_s} (\\overline{G})=0$ . We perform first order Taylor approximation of $\\nabla _{e_s} (\\overline{G})$ to get: 0 - (1-)r',o'+",
"(Hs+(1-)r',o' r',o')(-) where $H_s$ is the $d\\times d$ Hessian matrix for $s$ , i.e. second order derivative of the loss w.r.t. $$ , computed sparsely. Solving for $-$ gives us: -=",
"(1-) (Hs + (1-) r',o'r',o')-1 r',o' In practice, $H_s$ is positive definite, making $H_s + \\varphi (1-\\varphi ) _{r^{\\prime },o^{\\prime }}^\\intercal _{r^{\\prime },o^{\\prime }}$ positive definite as well, and invertible. Then, we compute the score change as: (s,r,o)-(s, r, o)= r,o (-) =",
" ((1-) (Hs + (1-) r',o'r',o')-1 r',o')r,o."
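A small NumPy sketch of the first-order approximation above for the multiplicative case; the inputs (z_{r,o}, z_{r',o'}, the Hessian H_s and the sigmoid value phi) are assumed to be precomputed elsewhere, and the toy values at the bottom are for illustration only.

```python
import numpy as np

def approx_score_change(z_ro, z_rpop, H_s, phi):
    """First-order estimate of psi_bar(s,r,o) - psi(s,r,o) after adding <s, r', o'>.

    z_ro   : d-vector g(e_r, e_o) for the target triple
    z_rpop : d-vector g(e_r', e_o') for the attack triple
    H_s    : d x d Hessian of the loss w.r.t. e_s
    phi    : sigma(psi(s, r', o')), the sigmoid of the attack triple's score
    """
    A = H_s + phi * (1.0 - phi) * np.outer(z_rpop, z_rpop)
    delta_es = (1.0 - phi) * np.linalg.solve(A, z_rpop)   # e_s_bar - e_s
    return float(z_ro @ delta_es)

# toy usage with random values (illustration only)
rng = np.random.default_rng(0)
d = 8
H = np.eye(d) * 2.0            # stand-in positive-definite Hessian
print(approx_score_change(rng.normal(size=d), rng.normal(size=d), H, phi=0.3))
```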
],
[
"In this section we approximate the effect of attack in the form of $\\langle s, r^{\\prime }, o \\rangle $ . In contrast to $\\langle s^{\\prime }, r^{\\prime }, o \\rangle $ attacks, for this scenario we need to consider the change in the $$ , upon applying the attack, in approximation of the change in the score as well. Using previous results, we can approximate the $-$ as: -=",
"(1-) (Ho + (1-) s,r's,r')-1 s,r' and similarly, we can approximate $-$ as: -=",
" (1-) (Hs + (1-) r',or',o)-1 r',o where $H_s$ is the Hessian matrix over $$ . Then using these approximations: s,r(-) =",
" s,r ((1-) (Ho + (1-) s,r's,r')-1 s,r') and: (-) r,o=",
" ((1-) (Hs + (1-) r',or',o)-1 r',o) r,o and then calculate the change in the score as: (s,r,o)-(s, r, o)=",
" s,r.(-) +(-).r,o =",
" s,r ((1-) (Ho + (1-) s,r's,r')-1 s,r')+",
" ((1-) (Hs + (1-) r',or',o)-1 r',o) r, o"
],
[
"In here we derive the approximation of the change in the score upon applying an adversarial modification for TransE BIBREF18 . Using similar assumptions and parameters as before, to calculate the effect of the attack, $\\overline{\\psi }{(s,r,o)}$ (where $\\psi {(s,r,o)}=|+-|$ ), we need to compute $$ . To do so, we need to derive an efficient computation for $$ . First, the derivative of the loss $(\\overline{G})= (G)+(\\langle s^{\\prime }, r^{\\prime }, o \\rangle )$ over $$ is: eo (G) = eo (G) + (1-) s', r'-(s',r',o) where $_{s^{\\prime }, r^{\\prime }} = ^{\\prime }+ ^{\\prime }$ , and $\\varphi = \\sigma (\\psi (s^{\\prime },r^{\\prime },o))$ . At convergence, after retraining, we expect $\\nabla _{e_o} (\\overline{G})=0$ . We perform first order Taylor approximation of $\\nabla _{e_o} (\\overline{G})$ to get: 0",
" (1-) (s', r'-)(s',r',o)+(Ho - Hs',r',o)(-)",
" Hs',r',o = (1-)(s', r'-)(s', r'-)(s',r',o)2+",
" 1-(s',r',o)-(1-) (s', r'-)(s', r'-)(s',r',o)3 where $H_o$ is the $d\\times d$ Hessian matrix for $o$ , i.e., second order derivative of the loss w.r.t. $$ , computed sparsely. Solving for $$ gives us: = -(1-) (Ho - Hs',r',o)-1 (s', r'-)(s',r',o)",
" + Then, we compute the score change as: (s,r,o)= |+-|",
"= |++(1-) (Ho - Hs',r',o)-1",
" (s', r'-)(s',r',o) - |",
"Calculating this expression is efficient since $H_o$ is a $d\\times d$ matrix."
],
[
"In this section, we provide the output of the for some target triples. Sample adversarial attacks are provided in Table 5 . As it shows, attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity types."
]
]
} | {
"question": [
"What datasets are used to evaluate this approach?",
"How is this approach used to detect incorrect facts?",
"Can this adversarial approach be used to directly improve model accuracy?"
],
"question_id": [
"bc9c31b3ce8126d1d148b1025c66f270581fde10",
"185841e979373808d99dccdade5272af02b98774",
"d427e3d41c4c9391192e249493be23926fc5d2e9"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"link prediction",
"link prediction",
"link prediction"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": " Kinship and Nations knowledge graphs, YAGO3-10 and WN18KGs knowledge graphs ",
"evidence": [
"FLOAT SELECTED: Table 2: Data Statistics of the benchmarks."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Data Statistics of the benchmarks."
]
},
{
"unanswerable": false,
"extractive_spans": [
"WN18 and YAGO3-10"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model, to decode the embeddings to their corresponding graph components, making the search of facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\\%$ and $50.7\\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\\%$ accuracy in detecting errors."
],
"highlighted_evidence": [
"WN18 and YAGO3-10",
"Second, we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 , BIBREF10 up to $27.3\\%$ and $50.7\\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. "
]
}
],
"annotation_id": [
"8f1f61837454d9f482cd81ea51f1eabd07870b6f",
"a922089b7e48e898c731a414d8b871e45fc72666"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ in the neighborhood of the train triple $\\langle s, r, o\\rangle $ , we need to find the triple $\\langle s^{\\prime },r^{\\prime },o\\rangle $ that results in the least change $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)$ when removed from the graph."
],
"highlighted_evidence": [
"if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data.",
"Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ in the neighborhood of the train triple $\\langle s, r, o\\rangle $ , we need to find the triple $\\langle s^{\\prime },r^{\\prime },o\\rangle $ that results in the least change $\\Delta _{(s^{\\prime },r^{\\prime })}(s,r,o)$ when removed from the graph."
]
}
],
"annotation_id": [
"ea95d6212fa8ce6e137058f83fa16c11f6c1c871"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"To evaluate this application, we inject random triples into the graph, and measure the ability of to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\\langle s^{\\prime }, r, o\\rangle $ where $s^{\\prime }$ is chosen randomly from all of the entities, and 2) incorrect triples in the form of $\\langle s^{\\prime }, r^{\\prime }, o\\rangle $ where $s^{\\prime }$ and $r^{\\prime }$ are chosen randomly. We choose 100 random triples from the observed graph, and for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The result of choosing the neighbor with the least influence on the target is provided in the Table 7 . When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that outperforms both of these with a considerable gap, obtaining an accuracy of $42\\%$ and $55\\%$ in detecting errors."
],
"highlighted_evidence": [
"When compared with baselines that randomly choose one of the neighbors, or assume that the fact with the lowest score is incorrect, we see that outperforms both of these with a considerable gap, obtaining an accuracy of $42\\%$ and $55\\%$ in detecting errors."
]
}
],
"annotation_id": [
"71d59a65743aca17c4b889d73bece4a6fac89739"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
]
} | {
"caption": [
"Figure 1: Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE): Change in the graph structure that changes the prediction of the retrained model, where (a) is the original sub-graph of the KG, (b) removes a neighboring link of the target, resulting in a change in the prediction, and (c) shows the effect of adding an attack triple on the target. These modifications were identified by our proposed approach.",
"Figure 2: Inverter Network The architecture of our inverter function that translate zs,r to its respective (s̃, r̃). The encoder component is fixed to be the encoder network of DistMult and ConvE respectively.",
"Table 1: Inverter Functions Accuracy, we calculate the accuracy of our inverter networks in correctly recovering the pairs of subject and relation from the test set of our benchmarks.",
"Table 2: Data Statistics of the benchmarks.",
"Figure 3: Influence function vs CRIAGE. We plot the average time (over 10 facts) of influence function (IF) and CRIAGE to identify an adversary as the number of entities in the Kinship KG is varied (by randomly sampling subgraphs of the KG). Even with small graphs and dimensionality, IF quickly becomes impractical.",
"Table 3: Ranking modifications by their impact on the target. We compare the true ranking of candidate triples with a number of approximations using ranking correlation coefficients. We compare our method with influence function (IF) with and without Hessian, and ranking the candidates based on their score, on two KGs (d = 10, averaged over 10 random targets). For the sake of brevity, we represent the Spearman’s ρ and Kendall’s τ rank correlation coefficients simply as ρ and τ .",
"Table 4: Robustness of Representation Models, the effect of adversarial attack on link prediction task. We consider two scenario for the target triples, 1) choosing the whole test dataset as the targets (All-Test) and 2) choosing a subset of test data that models are uncertain about them (Uncertain-Test).",
"Figure 4: Per-Relation Breakdown showing the effect of CRIAGE-Add on different relations in YAGO3-10.",
"Table 5: Extracted Rules for identifying the most influential link. We extract the patterns that appear more than 90% times in the neighborhood of the target triple. The output of CRIAGE-Remove is presented in red.",
"Table 6: Error Detection Accuracy in the neighborhood of 100 chosen samples. We choose the neighbor with the least value of ∆(s′,r′)(s, r, o) as the incorrect fact. This experiment assumes we know each target fact has exactly one error.",
"Table 7: Top adversarial triples for target samples."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"5-Figure3-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Figure4-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"12-Table7-1.png"
]
} |
1808.05902 | Learning Supervised Topic Models for Classification and Regression from Crowds | The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, deem learning under a single-annotator assumption unrealistic or unpractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed model over state-of-the-art approaches. | {
"section_name": [
"Introduction",
"Supervised topic models",
"Learning from multiple annotators",
"Classification model",
"Proposed model",
"Approximate inference",
"Parameter estimation",
"Stochastic variational inference",
"Document classification",
"Regression model",
"Experiments",
"Classification",
"Regression",
"Conclusion",
"Acknowledgment"
],
"paragraphs": [
[
"Topic models, such as latent Dirichlet allocation (LDA), allow us to analyze large collections of documents by revealing their underlying themes, or topics, and how each document exhibits them BIBREF0 . Therefore, it is not surprising that topic models have become a standard tool in data analysis, with many applications that go even beyond their original purpose of modeling textual data, such as analyzing images BIBREF1 , BIBREF2 , videos BIBREF3 , survey data BIBREF4 or social networks data BIBREF5 .",
"Since documents are frequently associated with other variables such as labels, tags or ratings, much interest has been placed on supervised topic models BIBREF6 , which allow the use of that extra information to “guide\" the topics discovery. By jointly learning the topics distributions and a classification or regression model, supervised topic models have been shown to outperform the separate use of their unsupervised analogues together with an external regression/classification algorithm BIBREF2 , BIBREF7 .",
"Supervised topics models are then state-of-the-art approaches for predicting target variables associated with complex high-dimensional data, such as documents or images. Unfortunately, the size of modern datasets makes the use of a single annotator unrealistic and unpractical for the majority of the real-world applications that involve some form of human labeling. For instance, the popular Reuters-21578 benchmark corpus was categorized by a group of personnel from Reuters Ltd and Carnegie Group, Inc. Similarly, the LabelMe project asks volunteers to annotate images from a large collection using an online tool. Hence, it is seldom the case where a single oracle labels an entire collection.",
"Furthermore, the Web, through its social nature, also exploits the wisdom of crowds to annotate large collections of documents and images. By categorizing texts, tagging images or rating products and places, Web users are generating large volumes of labeled content. However, when learning supervised models from crowds, the quality of labels can vary significantly due to task subjectivity and differences in annotator reliability (or bias) BIBREF8 , BIBREF9 . If we consider a sentiment analysis task, it becomes clear that the subjectiveness of the exercise is prone to generate considerably distinct labels from different annotators. Similarly, online product reviews are known to vary considerably depending on the personal biases and volatility of the reviewer's opinions. It is therefore essential to account for these issues when learning from this increasingly common type of data. Hence, the interest of researchers on building models that take the reliabilities of different annotators into consideration and mitigate the effect of their biases has spiked during the last few years (e.g. BIBREF10 , BIBREF11 ).",
"The increasing popularity of crowdsourcing platforms like Amazon Mechanical Turk (AMT) has further contributed to the recent advances in learning from crowds. This kind of platforms offers a fast, scalable and inexpensive solution for labeling large amounts of data. However, their heterogeneous nature in terms of contributors makes their straightforward application prone to many sorts of labeling noise and bias. Hence, a careless use of crowdsourced data as training data risks generating flawed models.",
"In this article, we propose a fully generative supervised topic model that is able to account for the different reliabilities of multiple annotators and correct their biases. The proposed model is then capable of jointly modeling the words in documents as arising from a mixture of topics, the latent true target variables as a result of the empirical distribution over topics of the documents, and the labels of the multiple annotators as noisy versions of that latent ground truth. We propose two different models, one for classification BIBREF12 and another for regression problems, thus covering a very wide range of possible practical applications, as we empirically demonstrate. Since the majority of the tasks for which multiple annotators are used generally involve complex data such as text, images and video, by developing a multi-annotator supervised topic model we are contributing with a powerful tool for learning predictive models of complex high-dimensional data from crowds.",
"Given that the increasing sizes of modern datasets can pose a problem for obtaining human labels as well as for Bayesian inference, we propose an efficient stochastic variational inference algorithm BIBREF13 that is able to scale to very large datasets. We empirically show, using both simulated and real multiple-annotator labels obtained from AMT for popular text and image collections, that the proposed models are able to outperform other state-of-the-art approaches in both classification and regression tasks. We further show the computational and predictive advantages of the stochastic variational inference algorithm over its batch counterpart."
],
[
"Latent Dirichlet allocation (LDA) soon proved to be a powerful tool for modeling documents BIBREF0 and images BIBREF1 by extracting their underlying topics, where topics are probability distributions across words, and each document is characterized by a probability distribution across topics. However, the need to model the relationship between documents and labels quickly gave rise to many supervised variants of LDA. One of the first notable works was that of supervised LDA (sLDA) BIBREF6 . By extending LDA through the inclusion of a response variable that is linearly dependent on the mean topic-assignments of the words in a document, sLDA is able to jointly model the documents and their responses, in order to find latent topics that will best predict the response variables for future unlabeled documents. Although initially developed for general continuous response variables, sLDA was later extended to classification problems BIBREF2 , by modeling the relationship between topic-assignments and labels with a softmax function as in logistic regression.",
"From a classification perspective, there are several ways in which document classes can be included in LDA. The most natural one in this setting is probably the sLDA approach, since the classes are directly dependent on the empirical topic mixture distributions. This approach is coherent with the generative perspective of LDA but, nevertheless, several discriminative alternatives also exist. For example, DiscLDA BIBREF14 introduces a class-dependent linear transformation on the topic mixture proportions of each document, such that the per-word topic assignments are drawn from linearly transformed mixture proportions. The class-specific transformation matrices are then able to reposition the topic mixture proportions so that documents with the same class labels have similar topics mixture proportions. The transformation matrices can be estimated by maximizing the conditional likelihood of response variables as the authors propose BIBREF14 .",
"An alternative way of including classes in LDA for supervision is the one proposed in the Labeled-LDA model BIBREF15 . Labeled-LDA is a variant of LDA that incorporates supervision by constraining the topic model to assign to a document only topics that correspond to its label set. While this allows for multiple labels per document, it is restrictive in the sense that the number of topics needs to be the same as the number of possible labels.",
"From a regression perspective, other than sLDA, the most relevant approaches are the Dirichlet-multimonial regression BIBREF16 and the inverse regression topic models BIBREF17 . The Dirichlet-multimonial regression (DMR) topic model BIBREF16 includes a log-linear prior on the document's mixture proportions that is a function of a set of arbitrary features, such as author, date, publication venue or references in scientific articles. The inferred Dirichlet-multinomial distribution can then be used to make predictions about the values of theses features. The inverse regression topic model (IRTM) BIBREF17 is a mixed-membership extension of the multinomial inverse regression (MNIR) model proposed in BIBREF18 that exploits the topical structure of text corpora to improve its predictions and facilitate exploratory data analysis. However, this results in a rather complex and inefficient inference procedure. Furthermore, making predictions in the IRTM is not trivial. For example, MAP estimates of targets will be in a different scale than the original document's metadata. Hence, the authors propose the use of a linear model to regress metadata values onto their MAP predictions.",
"The approaches discussed so far rely on likelihood-based estimation procedures. The work in BIBREF7 contrasts with these approaches by proposing MedLDA, a supervised topic model that utilizes the max-margin principle for estimation. Despite its margin-based advantages, MedLDA looses the probabilistic interpretation of the document classes given the topic mixture distributions. On the contrary, in this article we propose a fully generative probabilistic model of the answers of multiple annotators and of the words of documents arising from a mixture of topics."
],
[
"Learning from multiple annotators is an increasingly important research topic. Since the early work of Dawid and Skeene BIBREF19 , who attempted to obtain point estimates of the error rates of patients given repeated but conflicting responses to various medical questions, many approaches have been proposed. These usually rely on latent variable models. For example, in BIBREF20 the authors propose a model to estimate the ground truth from the labels of multiple experts, which is then used to train a classifier.",
"While earlier works usually focused on estimating the ground truth and the error rates of different annotators, recent works are more focused on the problem of learning classifiers using multiple-annotator data. This idea was explored by Raykar et al. BIBREF21 , who proposed an approach for jointly learning the levels of expertise of different annotators and the parameters of a logistic regression classifier, by modeling the ground truth labels as latent variables. This work was later extended in BIBREF11 by considering the dependencies of the annotators' labels on the instances they are labeling, and also in BIBREF22 through the use of Gaussian process classifiers. The model proposed in this article for classification problems shares the same intuition with this line of work and models the true labels as latent variables. However, it differs significantly by using a fully Bayesian approach for estimating the reliabilities and biases of the different annotators. Furthermore, it considers the problems of learning a low-dimensional representation of the input data (through topic modeling) and modeling the answers of multiple annotators jointly, providing an efficient stochastic variational inference algorithm.",
"Despite the considerable amount of approaches for learning classifiers from the noisy answers of multiple annotators, for continuous response variables this problem has been approached in a much smaller extent. For example, Groot et al. BIBREF23 address this problem in the context of Gaussian processes. In their work, the authors assign a different variance to the likelihood of the data points provided by the different annotators, thereby allowing them to have different noise levels, which can be estimated by maximizing the marginal likelihood of the data. Similarly, the authors in BIBREF21 propose an extension of their own classification approach to regression problems by assigning different variances to the Gaussian noise models of the different annotators. In this article, we take this idea one step further by also considering a per-annotator bias parameter, which gives the proposed model the ability to overcome certain personal tendencies in the annotators labeling styles that are quite common, for example, in product ratings and document reviews. Furthermore, we empirically validate the proposed model using real multi-annotator data obtained from Amazon Mechanical Turk. This contrasts with the previously mentioned works, which rely only on simulated annotators."
],
[
"In this section, we develop a multi-annotator supervised topic model for classification problems. The model for regression settings will be presented in Section SECREF5 . We start by deriving a (batch) variational inference algorithm for approximating the posterior distribution over the latent variables and an algorithm to estimate the model parameters. We then develop a stochastic variational inference algorithm that gives the model the capability of handling large collections of documents. Finally, we show how to use the learned model to classify new documents."
],
[
"Let INLINEFORM0 be an annotated corpus of size INLINEFORM1 , where each document INLINEFORM2 is given a set of labels INLINEFORM3 from INLINEFORM4 distinct annotators. We can take advantage of the inherent topical structure of documents and model their words as arising from a mixture of topics, each being defined as a distribution over the words in a vocabulary, as in LDA. In LDA, the INLINEFORM5 word, INLINEFORM6 , in a document INLINEFORM7 is provided a discrete topic-assignment INLINEFORM8 , which is drawn from the documents' distribution over topics INLINEFORM9 . This allows us to build lower-dimensional representations of documents, which we can explore to build classification models by assigning coefficients INLINEFORM10 to the mean topic-assignment of the words in the document, INLINEFORM11 , and applying a softmax function in order to obtain a distribution over classes. Alternatively, one could consider more flexible models such as Gaussian processes, however that would considerably increase the complexity of inference.",
"Unfortunately, a direct mapping between document classes and the labels provided by the different annotators in a multiple-annotator setting would correspond to assuming that they are all equally reliable, an assumption that is violated in practice, as previous works clearly demonstrate (e.g. BIBREF8 , BIBREF9 ). Hence, we assume the existence of a latent ground truth class, and model the labels from the different annotators using a noise model that states that, given a true class INLINEFORM0 , each annotator INLINEFORM1 provides the label INLINEFORM2 with some probability INLINEFORM3 . Hence, by modeling the matrix INLINEFORM4 we are in fact modeling a per-annotator (normalized) confusion matrix, which allows us to account for their different levels of expertise and correct their potential biases.",
"The generative process of the proposed model for classification problems can then be summarized as follows:",
"For each annotator INLINEFORM0 ",
"For each class INLINEFORM0 ",
"Draw reliability parameter INLINEFORM0 ",
"For each topic INLINEFORM0 ",
"Draw topic distribution INLINEFORM0 ",
"For each document INLINEFORM0 ",
"Draw topic proportions INLINEFORM0 ",
"For the INLINEFORM0 word",
"Draw topic assignment INLINEFORM0 ",
"Draw word INLINEFORM0 ",
"Draw latent (true) class INLINEFORM0 ",
"For each annotator INLINEFORM0 ",
"Draw annotator's label INLINEFORM0 ",
"where INLINEFORM0 denotes the set of annotators that labeled the INLINEFORM1 document, INLINEFORM2 , and the softmax is given by DISPLAYFORM0 ",
"Fig. FIGREF20 shows a graphical model representation of the proposed model, where INLINEFORM0 denotes the number of topics, INLINEFORM1 is the number of classes, INLINEFORM2 is the total number of annotators and INLINEFORM3 is the number of words in the document INLINEFORM4 . Shaded nodes are used to distinguish latent variable from the observed ones and small solid circles are used to denote model parameters. Notice that we included a Dirichlet prior over the topics INLINEFORM5 to produce a smooth posterior and control sparsity. Similarly, instead of computing maximum likelihood or MAP estimates for the annotators reliability parameters INLINEFORM6 , we place a Dirichlet prior over these variables and perform approximate Bayesian inference. This contrasts with previous works on learning classification models from crowds BIBREF21 , BIBREF24 .",
"For developing a multi-annotator supervised topic model for regression, we shall follow a similar intuition as the one we considered for classification. Namely, we shall assume that, for a given document INLINEFORM0 , each annotator provides a noisy version, INLINEFORM1 , of the true (continuous) target variable, which we denote by INLINEFORM2 . This can be, for example, the true rating of a product or the true sentiment of a document. Assuming that each annotator INLINEFORM3 has its own personal bias INLINEFORM4 and precision INLINEFORM5 (inverse variance), and assuming a Gaussian noise model for the annotators' answers, we have that DISPLAYFORM0 ",
" This approach is therefore more powerful than previous works BIBREF21 , BIBREF23 , where a single precision parameter was used to model the annotators' expertise. Fig. FIGREF45 illustrates this intuition for 4 annotators, represented by different colors. The “green annotator\" is the best one, since he is right on the target and his answers vary very little (low bias, high precision). The “yellow annotator\" has a low bias, but his answers are very uncertain, as they can vary a lot. Contrarily, the “blue annotator\" is very precise, but consistently over-estimates the true target (high bias, high precision). Finally, the “red annotator\" corresponds to the worst kind of annotator (with high bias and low precision).",
"Having specified a model for annotators answers given the true targets, the only thing left is to do is to specify a model of the latent true targets INLINEFORM0 given the empirical topic mixture distributions INLINEFORM1 . For this, we shall keep things simple and assume a linear model as in sLDA BIBREF6 . The generative process of the proposed model for continuous target variables can then be summarized as follows:",
"For each annotator INLINEFORM0 ",
"For each class INLINEFORM0 ",
"Draw reliability parameter INLINEFORM0 ",
"For each topic INLINEFORM0 ",
"Draw topic distribution INLINEFORM0 ",
"For each document INLINEFORM0 ",
"Draw topic proportions INLINEFORM0 ",
"For the INLINEFORM0 word",
"Draw topic assignment INLINEFORM0 ",
"Draw word INLINEFORM0 ",
"Draw latent (true) target INLINEFORM0 ",
"For each annotator INLINEFORM0 ",
"Draw answer INLINEFORM0 ",
"Fig. FIGREF60 shows a graphical representation of the proposed model."
],
[
"Given a dataset INLINEFORM0 , the goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM1 , the per-word topic assignments INLINEFORM2 , the per-topic distribution over words INLINEFORM3 , the per-document latent true class INLINEFORM4 , and the per-annotator confusion parameters INLINEFORM5 . As with LDA, computing the exact posterior distribution of the latent variables is computationally intractable. Hence, we employ mean-field variational inference to perform approximate Bayesian inference.",
"Variational inference methods seek to minimize the KL divergence between the variational and the true posterior distribution. We assume a fully-factorized (mean-field) variational distribution of the form DISPLAYFORM0 ",
" where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are variational parameters. Table TABREF23 shows the correspondence between variational parameters and the original parameters.",
"Let INLINEFORM0 denote the model parameters. Following BIBREF25 , the KL minimization can be equivalently formulated as maximizing the following lower bound on the log marginal likelihood DISPLAYFORM0 ",
" which we maximize using coordinate ascent.",
"Optimizing INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 gives the same coordinate ascent updates as in LDA BIBREF0 DISPLAYFORM0 ",
"The variational Dirichlet parameters INLINEFORM0 can be optimized by collecting only the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0 ",
" where INLINEFORM0 denotes the documents labeled by the INLINEFORM1 annotator, INLINEFORM2 , and INLINEFORM3 and INLINEFORM4 are the gamma and digamma functions, respectively. Taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 and setting them to zero, yields the following update DISPLAYFORM0 ",
"Similarly, the coordinate ascent updates for the documents distribution over classes INLINEFORM0 can be found by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0 ",
" where INLINEFORM0 . Adding the necessary Lagrange multipliers to ensure that INLINEFORM1 and setting the derivatives w.r.t. INLINEFORM2 to zero gives the following update DISPLAYFORM0 ",
" Observe how the variational distribution over the true classes results from a combination between the dot product of the inferred mean topic assignment INLINEFORM0 with the coefficients INLINEFORM1 and the labels INLINEFORM2 from the multiple annotators “weighted\" by their expected log probability INLINEFORM3 .",
"The main difficulty of applying standard variational inference methods to the proposed model is the non-conjugacy between the distribution of the mean topic-assignment INLINEFORM0 and the softmax. Namely, in the expectation DISPLAYFORM0 ",
" the second term is intractable to compute. We can make progress by applying Jensen's inequality to bound it as follows DISPLAYFORM0 ",
" where INLINEFORM0 , which is constant w.r.t. INLINEFORM1 . This local variational bound can be made tight by noticing that INLINEFORM2 , where equality holds if and only if INLINEFORM3 . Hence, given the current parameter estimates INLINEFORM4 , if we set INLINEFORM5 and INLINEFORM6 then, for an individual parameter INLINEFORM7 , we have that DISPLAYFORM0 ",
" Using this local bound to approximate the expectation of the log-sum-exp term, and taking derivatives of the evidence lower bound w.r.t. INLINEFORM0 with the constraint that INLINEFORM1 , yields the following fix-point update DISPLAYFORM0 ",
" where INLINEFORM0 denotes the size of the vocabulary. Notice how the per-word variational distribution over topics INLINEFORM1 depends on the variational distribution over the true class label INLINEFORM2 .",
"The variational inference algorithm iterates between Eqs. EQREF25 - EQREF33 until the evidence lower bound, Eq. EQREF24 , converges. Additional details are provided as supplementary material.",
"",
"",
"The goal of inference is to compute the posterior distribution of the per-document topic proportions INLINEFORM0 , the per-word topic assignments INLINEFORM1 , the per-topic distribution over words INLINEFORM2 and the per-document latent true targets INLINEFORM3 . As we did for the classification model, we shall develop a variational inference algorithm using coordinate ascent. The lower-bound on the log marginal likelihood is now given by DISPLAYFORM0 ",
" where INLINEFORM0 are the model parameters. We assume a fully-factorized (mean-field) variational distribution INLINEFORM1 of the form DISPLAYFORM0 ",
" where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 are the variational parameters. Notice the new Gaussian term, INLINEFORM5 , corresponding to the approximate posterior distribution of the unobserved true targets.",
"Optimizing the variational objective INLINEFORM0 w.r.t. INLINEFORM1 and INLINEFORM2 yields the same updates from Eqs. EQREF25 and . Optimizing w.r.t. INLINEFORM3 gives a similar update to the one in sLDA BIBREF6 DISPLAYFORM0 ",
" where we defined INLINEFORM0 . Notice how this update differs only from the one in BIBREF6 by replacing the true target variable by its expected value under the variational distribution, which is given by INLINEFORM1 .",
"The only variables left for doing inference on are then the latent true targets INLINEFORM0 . The variational distribution of INLINEFORM1 is governed by two parameters: a mean INLINEFORM2 and a variance INLINEFORM3 . Collecting all the terms in INLINEFORM4 that contain INLINEFORM5 gives DISPLAYFORM0 ",
" Taking derivatives of INLINEFORM0 and setting them to zero gives the following update for INLINEFORM1 DISPLAYFORM0 ",
" Notice how the value of INLINEFORM0 is a weighted average of what the linear regression model on the empirical topic mixture believes the true target should be, and the bias-corrected answers of the different annotators weighted by their individual precisions.",
"As for INLINEFORM0 , we can optimize INLINEFORM1 w.r.t. INLINEFORM2 by collecting all terms that contain INLINEFORM3 DISPLAYFORM0 ",
" and taking derivatives, yielding the update DISPLAYFORM0 "
],
[
"The model parameters are INLINEFORM0 . The parameters INLINEFORM1 of the Dirichlet priors can be regarded as hyper-parameters of the proposed model. As with many works on topic models (e.g. BIBREF26 , BIBREF2 ), we assume hyper-parameters to be fixed, since they can be effectively selected by grid-search procedures which are able to explore well the parameter space without suffering from local optima. Our focus is then on estimating the coefficients INLINEFORM2 using a variational EM algorithm. Therefore, in the E-step we use the variational inference algorithm from section SECREF21 to estimate the posterior distribution of the latent variables, and in the M-step we find maximum likelihood estimates of INLINEFORM3 by maximizing the evidence lower bound INLINEFORM4 . Unfortunately, taking derivatives of INLINEFORM5 w.r.t. INLINEFORM6 does not yield a closed-form solution. Hence, we use a numerical method, namely L-BFGS BIBREF27 , to find an optimum. The objective function and gradients are given by DISPLAYFORM0 ",
" where, for convenience, we defined the following variable: INLINEFORM0 .",
"The parameters of the proposed regression model are INLINEFORM0 . As we did for the classification model, we shall assume the Dirichlet parameters, INLINEFORM1 and INLINEFORM2 , to be fixed. Similarly, we shall assume that the variance of the true targets, INLINEFORM3 , to be constant. The only parameters left to estimate are then the regression coefficients INLINEFORM4 and the annotators biases, INLINEFORM5 , and precisions, INLINEFORM6 , which we estimate using variational Bayesian EM.",
"Since the latent true targets are now linear functions of the documents' empirical topic mixtures (i.e. there is no softmax function), we can find a closed form solution for the regression coefficients INLINEFORM0 . Taking derivatives of INLINEFORM1 w.r.t. INLINEFORM2 and setting them to zero, gives the following solution for INLINEFORM3 DISPLAYFORM0 ",
" where DISPLAYFORM0 ",
"We can find maximum likelihood estimates for the annotator biases INLINEFORM0 by optimizing the lower bound on the marginal likelihood. The terms in INLINEFORM1 that involve INLINEFORM2 are DISPLAYFORM0 ",
" Taking derivatives w.r.t. INLINEFORM0 gives the following estimate for the bias of the INLINEFORM1 annotator DISPLAYFORM0 ",
"Similarly, we can find maximum likelihood estimates for the precisions INLINEFORM0 of the different annotators by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0 ",
" The maximum likelihood estimate for the precision (inverse variance) of the INLINEFORM0 annotator is then given by DISPLAYFORM0 ",
"Given a set of fitted parameters, it is then straightforward to make predictions for new documents: it is just necessary to infer the (approximate) posterior distribution over the word-topic assignments INLINEFORM0 for all the words using the coordinates ascent updates of standard LDA (Eqs. EQREF25 and EQREF42 ), and then use the mean topic assignments INLINEFORM1 to make predictions INLINEFORM2 ."
],
[
"In Section SECREF21 , we proposed a batch coordinate ascent algorithm for doing variational inference in the proposed model. This algorithm iterates between analyzing every document in the corpus to infer the local hidden structure, and estimating the global hidden variables. However, this can be inefficient for large datasets, since it requires a full pass through the data at each iteration before updating the global variables. In this section, we develop a stochastic variational inference algorithm BIBREF13 , which follows noisy estimates of the gradients of the evidence lower bound INLINEFORM0 .",
"Based on the theory of stochastic optimization BIBREF28 , we can find unbiased estimates of the gradients by subsampling a document (or a mini-batch of documents) from the corpus, and using it to compute the gradients as if that document was observed INLINEFORM0 times. Hence, given an uniformly sampled document INLINEFORM1 , we use the current posterior distributions of the global latent variables, INLINEFORM2 and INLINEFORM3 , and the current coefficient estimates INLINEFORM4 , to compute the posterior distribution over the local hidden variables INLINEFORM5 , INLINEFORM6 and INLINEFORM7 using Eqs. EQREF25 , EQREF33 and EQREF29 respectively. These posteriors are then used to update the global variational parameters, INLINEFORM8 and INLINEFORM9 by taking a step of size INLINEFORM10 in the direction of the noisy estimates of the natural gradients.",
"Algorithm SECREF37 describes a stochastic variational inference algorithm for the proposed model. Given an appropriate schedule for the learning rates INLINEFORM0 , such that INLINEFORM1 and INLINEFORM2 , the stochastic optimization algorithm is guaranteed to converge to a local maximum of the evidence lower bound BIBREF28 .",
"[t] Stochastic variational inference for the proposed classification model [1] Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 Set t = t + 1 Sample a document INLINEFORM6 uniformly from the corpus Compute INLINEFORM7 using Eq. EQREF33 , for INLINEFORM8 Compute INLINEFORM9 using Eq. EQREF25 Compute INLINEFORM10 using Eq. EQREF29 local parameters INLINEFORM11 , INLINEFORM12 and INLINEFORM13 converge Compute step-size INLINEFORM14 Update topics variational parameters DISPLAYFORM0 ",
" Update annotators confusion parameters DISPLAYFORM0 ",
" global convergence criterion is met",
"As we did for the classification model from Section SECREF4 , we can envision developing a stochastic variational inference for the proposed regression model. In this case, the only “global\" latent variables are the per-topic distributions over words INLINEFORM0 . As for the “local\" latent variables, instead of a single variable INLINEFORM1 , we now have two variables per-document: INLINEFORM2 and INLINEFORM3 . The stochastic variational inference can then be summarized as shown in Algorithm SECREF76 . For added efficiency, one can also perform stochastic updates of the annotators biases INLINEFORM4 and precisions INLINEFORM5 , by taking a step in the direction of the gradient of the noisy evidence lower bound scaled by the step-size INLINEFORM6 .",
"[t] Stochastic variational inference for the proposed regression model [1] Initialize INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 Set t = t + 1 Sample a document INLINEFORM7 uniformly from the corpus Compute INLINEFORM8 using Eq. EQREF64 , for INLINEFORM9 Compute INLINEFORM10 using Eq. EQREF25 Compute INLINEFORM11 using Eq. EQREF66 Compute INLINEFORM12 using Eq. EQREF68 local parameters INLINEFORM13 , INLINEFORM14 and INLINEFORM15 converge Compute step-size INLINEFORM16 Update topics variational parameters DISPLAYFORM0 ",
" global convergence criterion is met"
],
[
"In order to make predictions for a new (unlabeled) document INLINEFORM0 , we start by computing the approximate posterior distribution over the latent variables INLINEFORM1 and INLINEFORM2 . This can be achieved by dropping the terms that involve INLINEFORM3 , INLINEFORM4 and INLINEFORM5 from the model's joint distribution (since, at prediction time, the multi-annotator labels are no longer observed) and averaging over the estimated topics distributions. Letting the topics distribution over words inferred during training be INLINEFORM6 , the joint distribution for a single document is now simply given by DISPLAYFORM0 ",
" Deriving a mean-field variational inference algorithm for computing the posterior over INLINEFORM0 results in the same fixed-point updates as in LDA BIBREF0 for INLINEFORM1 (Eq. EQREF25 ) and INLINEFORM2 DISPLAYFORM0 ",
" Using the inferred posteriors and the coefficients INLINEFORM0 estimated during training, we can make predictions as follows DISPLAYFORM0 ",
" This is equivalent to making predictions in the classification version of sLDA BIBREF2 ."
],
[
"In this section, we develop a variant of the model proposed in Section SECREF4 for regression problems. We shall start by describing the proposed model with a special focus on the how to handle multiple annotators with different biases and reliabilities when the target variables are continuous variables. Next, we present a variational inference algorithm, highlighting the differences to the classification version. Finally, we show how to optimize the model parameters."
],
[
"In this section, the proposed multi-annotator supervised LDA models for classification and regression (MA-sLDAc and MA-sLDAr, respectively) are validated using both simulated annotators on popular corpora and using real multiple-annotator labels obtained from Amazon Mechanical Turk. Namely, we shall consider the following real-world problems: classifying posts and news stories; classifying images according to their content; predicting number of stars that a given user gave to a restaurant based on the review; predicting movie ratings using the text of the reviews."
],
[
"In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups consists of twenty thousand messages taken from twenty newsgroups, and is divided in six super-classes, which are, in turn, partitioned in several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers\", “science\", “politics\" and “recreative\". The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.",
"The different annotators were simulated by sampling their answers from a multinomial distribution, where the parameters are given by the lines of the annotators' confusion matrices. Hence, for each annotator INLINEFORM0 , we start by pre-defining a confusion matrix INLINEFORM1 with elements INLINEFORM2 , which correspond to the probability that the annotators' answer is INLINEFORM3 given that the true label is INLINEFORM4 , INLINEFORM5 . Then, the answers are sampled i.i.d. from INLINEFORM6 . This procedure was used to simulate 5 different annotators with the following accuracies: 0.737, 0.468, 0.284, 0.278, 0.260. In this experiment, no repeated labelling was used. Hence, each annotator only labels roughly one-fifth of the data. When compared to the ground truth, the simulated answers revealed an accuracy of 0.405. See Table TABREF81 for an overview of the details of the classification datasets used.",
"Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:",
"[itemsep=0.02cm]",
"LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 .",
"LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.",
"LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.",
"Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topics distributions, which are then used to train a SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).",
"sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers.",
"For all the experiments the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search in the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of the all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroup dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5 . The INLINEFORM6 was kept at 1. The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, being the svi version the one that performs best.",
"In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm.",
"In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 .",
"The Reuters-21578 is a collection of manually categorized newswire stories with labels such as Acquisitions, Crude-oil, Earnings or Grain. For this experiment, only the documents belonging to the ModApte split were considered with the additional constraint that the documents should have no more than one label. This resulted in a total of 7016 documents distributed among 8 classes. Of these, 1800 documents were submitted to AMT for multiple annotators to label, giving an average of approximately 3 answers per document (see Table TABREF81 for further details). The remaining 5216 documents were used for testing. The collected answers yield an average worker accuracy of 56.8%. Applying majority voting to these answers reveals a ground truth accuracy of 71.0%. Fig. FIGREF90 shows the boxplots of the number of answers per worker and their accuracies. Observe how applying majority voting yields a higher accuracy than the median accuracy of the workers.",
"The results obtained by the different approaches are given in Fig. FIGREF91 , where it can be seen that the proposed model (MA-sLDAc) outperforms all the other approaches. For this dataset, the svi algorithm is using mini-batches of 300 documents.",
"The proposed model was also validated using a dataset from the computer vision domain: LabelMe BIBREF31 . In contrast to the Reuters and Newsgroups corpora, LabelMe is an open online tool to annotate images. Hence, this experiment allows us to see how the proposed model generalizes beyond non-textual data. Using the Matlab interface provided in the projects' website, we extracted a subset of the LabelMe data, consisting of all the 256 x 256 images with the categories: “highway\", “inside city\", “tall building\", “street\", “forest\", “coast\", “mountain\" or “open country\". This allowed us to collect a total of 2688 labeled images. Of these, 1000 images were given to AMT workers to classify with one of the classes above. Each image was labeled by an average of 2.547 workers, with a mean accuracy of 69.2%. When majority voting is applied to the collected answers, a ground truth accuracy of 76.9% is obtained. Fig. FIGREF92 shows the boxplots of the number of answers per worker and their accuracies. Interestingly, the worker accuracies are much higher and their distribution is much more concentrated than on the Reuters-21578 data (see Fig. FIGREF90 ), which suggests that this is an easier task for the AMT workers.",
"The preprocessing of the images used is similar to the approach in BIBREF1 . It uses 128-dimensional SIFT BIBREF32 region descriptors selected by a sliding grid spaced at one pixel. This sliding grid extracts local regions of the image with sizes uniformly sampled between 16 x 16 and 32 x 32 pixels. The 128-dimensional SIFT descriptors produced by the sliding window are then fed to a k-means algorithm (with k=200) in order construct a vocabulary of 200 “visual words\". This allows us to represent the images with a bag of visual words model.",
"With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:",
"Bosch 2006 (mv): This baseline is similar to one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of k-nearest neighbor (kNN) classifier using the documents' topics distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).",
"The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.",
"Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.",
"In order to verify that the proposed model was estimating the (normalized) confusion matrices INLINEFORM0 of the different workers correctly, a random sample of them was plotted against the true confusion matrices (i.e. the normalized confusion matrices evaluated against the true labels). Figure FIGREF95 shows the results obtained with 60 topics on the Reuters-21578 dataset, where the color intensity of the cells increases with the magnitude of the value of INLINEFORM1 (the supplementary material provides a similar figure for the LabelMe dataset). Using this visualization we can verify that the AMT workers are quite heterogeneous in their labeling styles and in the kind of mistakes they make, with several workers showing clear biases (e.g. workers 3 and 4), while others made mistakes more randomly (e.g. worker 1). Nevertheless, the proposed is able to capture these patterns correctly and account for effect.",
"To gain further insights, Table TABREF96 shows 4 example images from the LabelMe dataset, along with their true labels, the answers provided by the different workers, the true label inferred by the proposed model and the likelihood of the different possible answers given the true label for each annotator ( INLINEFORM0 for INLINEFORM1 ) using a color-coding scheme similar to Fig. FIGREF95 . In the first example, although majority voting suggests “inside city\" to be the correct label, we can see that the model has learned that annotators 32 and 43 are very likely to provide the label “inside city\" when the true label is actually “street\", and it is able to leverage that fact to infer that the correct label is “street\". Similarly, in the second image the model is able to infer the correct true label from 3 conflicting labels. However, in the third image the model is not able to recover the correct true class, which can be explained by it not having enough evidence about the annotators and their reliabilities and biases (likelihood distribution for these cases is uniform). In fact, this raises interesting questions regarding requirements for the minimum number of labels per annotator, their reliabilities and their coherence. Finally, for the fourth image, somehow surprisingly, the model is able to infer the correct true class, even though all 3 annotators labeled it as “inside city\"."
],
[
"As for proposed classification model, we start by validating MA-sLDAr using simulated annotators on a popular corpus where the documents have associated targets that we wish to predict. For this purpose, we shall consider a dataset of user-submitted restaurant reviews from the website we8there.com. This dataset was originally introduced in BIBREF34 and it consists of 6260 reviews. For each review, there is a five-star rating on four specific aspects of quality (food, service, value, and atmosphere) as well as the overall experience. Our goal is then to predict the overall experience of the user based on his comments in the review. We apply the same preprocessing as in BIBREF18 , which consists in tokenizing the text into bigrams and discarding those that appear in less than ten reviews. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.",
"As with the classification model, we seek to simulate an heterogeneous set of annotators in terms of reliability and bias. Hence, in order to simulate an annotator INLINEFORM0 , we proceed as follows: let INLINEFORM1 be the true review of the restaurant; we start by assigning a given bias INLINEFORM2 and precision INLINEFORM3 to the reviewers, depending on what type of annotator we wish to simulate (see Fig. FIGREF45 ); we then sample a simulated answer as INLINEFORM4 . Using this procedure, we simulated 5 annotators with the following (bias, precision) pairs: (0.1, 10), (-0.3, 3), (-2.5, 10), (0.1, 0.5) and (1, 0.25). The goal is to have 2 good annotators (low bias, high precision), 1 highly biased annotator and 2 low precision annotators where one is unbiased and the other is reasonably biased. The coefficients of determination ( INLINEFORM5 ) of the simulated annotators are: [0.940, 0.785, -2.469, -0.131, -1.749]. Computing the mean of the answers of the different annotators yields a INLINEFORM6 of 0.798. Table TABREF99 gives an overview on the statistics of datasets used in the regression experiments.",
"We compare the proposed model (MA-sLDAr) with the two following baselines:",
"[itemsep=0.02cm]",
"LDA + LinReg (mean): This baseline corresponds to applying unsupervised LDA to the data, and learning a linear regression model on the inferred topics distributions of the documents. The answers from the different annotators were aggregated by computing the mean.",
"sLDA (mean): This corresponds to using the regression version of sLDA BIBREF6 with the target variables obtained by computing the mean of the annotators' answers.",
"Fig. FIGREF102 shows the results obtained for different numbers of topics. Do to the stochastic nature of both the annotators simulation procedure and the initialization of the variational Bayesian EM algorithm, we repeated each experiment 30 times and report the average INLINEFORM0 obtained with the corresponding standard deviation. Since the regression datasets that are considered in this article are not large enough to justify the use of a stochastic variational inference (svi) algorithm, we only made experiments using the batch algorithm developed in Section SECREF61 . The results obtained clearly show the improved performance of MA-sLDAr over the other methods.",
"The proposed multi-annotator regression model (MA-sLDAr) was also validated with real annotators by using AMT. For that purpose, the movie review dataset from BIBREF35 was used. This dataset consists of 5006 movie reviews along with their respective star rating (from 1 to 10). The goal of this experiment is then predict how much a person liked a movie based on what she says about it. We ask workers to guess how much they think the writer of the review liked the movie based on her comments. An average of 4.96 answers per-review was collected for a total of 1500 reviews. The remaining reviews were used for testing. In average, each worker rated approximately 55 reviews. Using the mean answer as an estimate of the true rating of the movie yields a INLINEFORM0 of 0.830. Table TABREF99 gives an overview of the statistics of this data. Fig. FIGREF104 shows boxplots of the number of answers per worker, as well as boxplots of their respective biases ( INLINEFORM1 ) and variances (inverse precisions, INLINEFORM2 ).",
"The preprocessing of the text consisted of stemming and stop-words removal. Using the preprocessed data, the proposed MA-sLDAr model was compared with the same baselines that were used with the we8there dataset in Section UID98 . Fig. FIGREF105 shows the results obtained for different numbers of topics. These results show that the proposed model outperforms all the other baselines.",
"With the purpose of verifying that the proposed model is indeed estimating the biases and precisions of the different workers correctly, we plotted the true values against the estimates of MA-sLDAr with 60 topics for a random subset of 10 workers. Fig. FIGREF106 shows the obtained results, where higher color intensities indicate higher values. Ideally, the colour of two horizontally-adjacent squares would then be of similar shades, and this is indeed what happens in practice for the majority of the workers, as Fig. FIGREF106 shows. Interestingly, the figure also shows that there are a couple of workers that are considerably biased (e.g. workers 6 and 8) and that those biases are being correctly estimated, thus justifying the inclusion of a bias parameter in the proposed model, which contrasts with previous works BIBREF21 , BIBREF23 ."
],
[
"This article proposed a supervised topic model that is able to learn from multiple annotators and crowds, by accounting for their biases and different levels of expertise. Given the large sizes of modern datasets, and considering that the majority of the tasks for which crowdsourcing and multiple annotators are desirable candidates, generally involve complex high-dimensional data such as text and images, the proposed model constitutes a strong contribution for the multi-annotator paradigm. This model is then capable of jointly modeling the words in documents as arising from a mixture of topics, as well as the latent true target variables and the (noisy) answers of the multiple annotators. We developed two distinct models, one for classification and another for regression, which share similar intuitions but that inevitably differ due to the nature of the target variables. We empirically showed, using both simulated and real annotators from Amazon Mechanical Turk that the proposed model is able to outperform state-of-the-art approaches in several real-world problems, such as classifying posts, news stories and images, or predicting the number of stars of restaurant and the rating of movie based on their reviews. For this, we use various popular datasets from the state-of-the-art, that are commonly used for benchmarking machine learning algorithms. Finally, an efficient stochastic variational inference algorithm was described, which gives the proposed models the ability to scale to large datasets."
],
[
"The Fundação para a Ciência e Tecnologia (FCT) is gratefully acknowledged for founding this work with the grants SFRH/BD/78396/2011 and PTDC/ECM-TRA/1898/2012 (InfoCROWDS).",
"[]Mariana Lourenço has a MSc degree in Informatics Engineering from University of Coimbra, Portugal. Her thesis presented a supervised topic model that is able to learn from crowds and she took part in a research project whose primary objective was to exploit online information about public events to build predictive models of flows of people in the city. Her main research interests are machine learning, pattern recognition and natural language processing.",
"[]Bernardete Ribeiro is Associate Professor at the Informatics Engineering Department, University of Coimbra in Portugal, from where she received a D.Sc. in Informatics Engineering, a Ph.D. in Electrical Engineering, speciality of Informatics, and a MSc in Computer Science. Her research interests are in the areas of Machine Learning, Pattern Recognition and Signal Processing and their applications to a broad range of fields. She was responsible/participated in several research projects in a wide range of application areas such as Text Classification, Financial, Biomedical and Bioinformatics. Bernardete Ribeiro is IEEE Senior Member, and member of IARP International Association of Pattern Recognition and ACM.",
"[]Francisco C. Pereira is Full Professor at the Technical University of Denmark (DTU), where he leads the Smart Mobility research group. His main research focus is on applying machine learning and pattern recognition to the context of transportation systems with the purpose of understanding and predicting mobility behavior, and modeling and optimizing the transportation system as a whole. He has Master€™s (2000) and Ph.D. (2005) degrees in Computer Science from University of Coimbra, and has authored/co-authored over 70 journal and conference papers in areas such as pattern recognition, transportation, knowledge based systems and cognitive science. Francisco was previously Research Scientist at MIT and Assistant Professor in University of Coimbra. He was awarded several prestigious prizes, including an IEEE Achievements award, in 2009, the Singapore GYSS Challenge in 2013, and the Pyke Johnson award from Transportation Research Board, in 2015."
]
]
} | {
"question": [
"what are the advantages of the proposed model?",
"what are the state of the art approaches?",
"what datasets were used?"
],
"question_id": [
"330f2cdeab689670b68583fc4125f5c0b26615a8",
"c87b2dd5c439d5e68841a705dd81323ec0d64c97",
"f7789313a804e41fcbca906a4e5cf69039eeef9f"
],
"nlp_background": [
"",
"",
""
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"he proposed model outperforms all the baselines, being the svi version the one that performs best.",
"the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"For all the experiments the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search in the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of the all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroup dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5 . The INLINEFORM6 was kept at 1. The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, being the svi version the one that performs best.",
"In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm."
],
"highlighted_evidence": [
"The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, being the svi version the one that performs best.",
"In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm."
]
}
],
"annotation_id": [
"84967f1062c396a7337649b5304d998703e93fef"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Bosch 2006 (mv)",
"LDA + LogReg (mv)",
"LDA + Raykar",
"LDA + Rodrigues",
"Blei 2003 (mv)",
"sLDA (mv)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:",
"Bosch 2006 (mv): This baseline is similar to one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of k-nearest neighbor (kNN) classifier using the documents' topics distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).",
"The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.",
"Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.",
"Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:",
"[itemsep=0.02cm]",
"LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 .",
"LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.",
"LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.",
"Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topics distributions, which are then used to train a SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).",
"sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers."
],
"highlighted_evidence": [
"With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:\n\nBosch 2006 (mv): This baseline is similar to one in BIBREF33 . The authors propose the use of pLSA to extract the latent topics, and the use of k-nearest neighbor (kNN) classifier using the documents' topics distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0 ) are aggregated using majority voting (mv).\n\nThe results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94 , where the svi version is using mini-batches of 200 documents.\n\nAnalyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators.",
"Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines:\n\n[itemsep=0.02cm]\n\nLDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data, and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that, when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups' simulated annotators, but the same does not apply for the experiments in Section UID89 .\n\nLDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features.\n\nLDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead.\n\nBlei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topics distributions, which are then used to train a SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv).\n\nsLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers."
]
}
],
"annotation_id": [
"f9f64e8dcd09a7ad61a46512180dec3915bf9d23"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Reuters-21578 BIBREF30",
" LabelMe BIBREF31",
"20-Newsgroups benchmark corpus BIBREF29 "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 .",
"In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups consists of twenty thousand messages taken from twenty newsgroups, and is divided in six super-classes, which are, in turn, partitioned in several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers\", “science\", “politics\" and “recreative\". The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing."
],
"highlighted_evidence": [
"In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 .",
"In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. "
]
},
{
"unanswerable": false,
"extractive_spans": [
" 20-Newsgroups benchmark corpus ",
"Reuters-21578",
"LabelMe"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups consists of twenty thousand messages taken from twenty newsgroups, and is divided in six super-classes, which are, in turn, partitioned in several sub-classes. For this first set of experiments, only the four most populated super-classes were used: “computers\", “science\", “politics\" and “recreative\". The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing.",
"In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 ."
],
"highlighted_evidence": [
"In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. ",
"In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 ."
]
}
],
"annotation_id": [
"73977b589a798aea12cbc3499307cd06109f130b",
"d051f08f29f9ec9b662c02c7f4af884215a18a3a"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Fig. 1. Graphical representation of the proposed model for classification.",
"TABLE 1 Correspondence Between Variational Parameters and the Original Parameters",
"Fig. 3. Graphical representation of the proposed model for regression.",
"Fig. 2. Example of four different annotators (represented by different colours) with different biases and precisions.",
"TABLE 2 Overall Statistics of the Classification Datasets Used in the Experiments",
"Fig. 5. Comparison of the log marginal likelihood between the batch and the stochastic variational inference (svi) algorithms on the 20-newsgroups corpus.",
"Fig. 4. Average testset accuracy (over five runs; stddev.) of the different approaches on the 20-newsgroups data.",
"Fig. 6. Boxplot of the number of answers per worker (a) and their respective accuracies (b) for the reuters dataset.",
"Fig. 7. Average testset accuracy (over 30 runs; stddev.) of the different approaches on the reuters data.",
"Fig. 9. Average testset accuracy (over 30 runs; stddev.) of the different approaches on the LabelMe data.",
"Fig. 8. Boxplot of the number of answers per worker (a) and trespective accuracies (b) for the LabelMe dataset.",
"Fig. 10. True versus estimated confusion matrix (cm) of six different workers of the reuters-21,578 dataset.",
"Fig. 11. Average testset R2 (over 30 runs; stddev.) of the different approaches on the we8there data.",
"TABLE 3 Results for Four Example LabelMe Images",
"Fig. 14. True versus predicted biases and precisions of 10 random workers of the movie reviews dataset.",
"Fig. 12. Boxplot of the number of answers per worker (a) and their respective biases (b) and variances (c) for the movie reviews dataset.",
"Fig. 13. Average testset R2 (over 30 runs; stddev.) of the different approaches on the movie reviews data."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"6-Figure3-1.png",
"6-Figure2-1.png",
"9-Table2-1.png",
"9-Figure5-1.png",
"9-Figure4-1.png",
"9-Figure6-1.png",
"10-Figure7-1.png",
"10-Figure9-1.png",
"10-Figure8-1.png",
"11-Figure10-1.png",
"12-Figure11-1.png",
"12-Table3-1.png",
"13-Figure14-1.png",
"13-Figure12-1.png",
"13-Figure13-1.png"
]
} |
2002.11893 | CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset | To advance multi-domain (cross-domain) dialogue modeling as well as alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals that favor inter-domain dependency and encourage natural transition across domains in conversation. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will facilitate researchers to compare and evaluate their models on this corpus. The large size and rich annotation of CrossWOZ make it suitable to investigate a variety of tasks in cross-domain dialogue modeling, such as dialogue state tracking, policy learning, user simulation, etc. | {
"section_name": [
"Introduction",
"Related Work",
"Data Collection",
"Data Collection ::: Database Construction",
"Data Collection ::: Goal Generation",
"Data Collection ::: Dialogue Collection",
"Data Collection ::: Dialogue Collection ::: User Side",
"Data Collection ::: Dialogue Collection ::: Wizard Side",
"Data Collection ::: Dialogue Annotation",
"Statistics",
"Corpus Features",
"Benchmark and Analysis",
"Benchmark and Analysis ::: Natural Language Understanding",
"Benchmark and Analysis ::: Dialogue State Tracking",
"Benchmark and Analysis ::: Dialogue Policy Learning",
"Benchmark and Analysis ::: Natural Language Generation",
"Benchmark and Analysis ::: User Simulator",
"Benchmark and Analysis ::: Evaluation with User Simulation",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the research is still largely limited by the availability of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single domain conversations, including ATIS BIBREF6, DSTC 2 BIBREF7, Frames BIBREF8, KVRET BIBREF9, WOZ 2.0 BIBREF10 and M2M BIBREF11.",
"Despite the significant contributions to the community, these datasets are still limited in size, language variation, or task complexity. Furthermore, there is a gap between existing dialogue corpora and real-life human dialogue data. In real-life conversations, it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts. Thus, real-life dialogues are much more complicated than those dialogues that are only simulated within a single domain. To address this issue, some multi-domain corpora have been proposed BIBREF12, BIBREF13. The most notable corpus is MultiWOZ BIBREF12, a large-scale multi-domain dataset which consists of crowdsourced human-to-human dialogues. It contains 10K dialogue sessions and 143K utterances for 7 domains, with annotation of system-side dialogue states and dialogue acts. However, the state annotations are noisy BIBREF14, and user-side dialogue acts are missing. The dependency across domains is simply embodied in imposing the same pre-specified constraints on different domains, such as requiring both a hotel and an attraction to locate in the center of the town.",
"In comparison to the abundance of English dialogue data, surprisingly, there is still no widely recognized Chinese task-oriented dialogue corpus. In this paper, we propose CrossWOZ, a large-scale Chinese multi-domain (cross-domain) task-oriented dialogue dataset. An dialogue example is shown in Figure FIGREF1. We compare CrossWOZ to other corpora in Table TABREF5 and TABREF6. Our dataset has the following features comparing to other corpora (particularly MultiWOZ BIBREF12):",
"The dependency between domains is more challenging because the choice in one domain will affect the choices in related domains in CrossWOZ. As shown in Figure FIGREF1 and Table TABREF6, the hotel must be near the attraction chosen by the user in previous turns, which requires more accurate context understanding.",
"It is the first Chinese corpus that contains large-scale multi-domain task-oriented dialogues, consisting of 6K sessions and 102K utterances for 5 domains (attraction, restaurant, hotel, metro, and taxi).",
"Annotation of dialogue states and dialogue acts is provided for both the system side and user side. The annotation of user states enables us to track the conversation from the user's perspective and can empower the development of more elaborate user simulators.",
"In this paper, we present the process of dialogue collection and provide detailed data analysis of the corpus. Statistics show that our cross-domain dialogues are complicated. To facilitate model comparison, benchmark models are provided for different modules in pipelined task-oriented dialogue systems, including natural language understanding, dialogue state tracking, dialogue policy learning, and natural language generation. We also provide a user simulator, which will facilitate the development and evaluation of dialogue models on this corpus. The corpus and the benchmark models are publicly available at https://github.com/thu-coai/CrossWOZ."
],
[
"According to whether the dialogue agent is human or machine, we can group the collection methods of existing task-oriented dialogue datasets into three categories. The first one is human-to-human dialogues. One of the earliest and well-known ATIS dataset BIBREF6 used this setting, followed by BIBREF8, BIBREF9, BIBREF10, BIBREF15, BIBREF16 and BIBREF12. Though this setting requires many human efforts, it can collect natural and diverse dialogues. The second one is human-to-machine dialogues, which need a ready dialogue system to converse with humans. The famous Dialogue State Tracking Challenges provided a set of human-to-machine dialogue data BIBREF17, BIBREF7. The performance of the dialogue system will largely influence the quality of dialogue data. The third one is machine-to-machine dialogues. It needs to build both user and system simulators to generate dialogue outlines, then use templates BIBREF3 to generate dialogues or further employ people to paraphrase the dialogues to make them more natural BIBREF11, BIBREF13. It needs much less human effort. However, the complexity and diversity of dialogue policy are limited by the simulators. To explore dialogue policy in multi-domain scenarios, and to collect natural and diverse dialogues, we resort to the human-to-human setting.",
"Most of the existing datasets only involve single domain in one dialogue, except MultiWOZ BIBREF12 and Schema BIBREF13. MultiWOZ dataset has attracted much attention recently, due to its large size and multi-domain characteristics. It is at least one order of magnitude larger than previous datasets, amounting to 8,438 dialogues and 115K turns in the training set. It greatly promotes the research on multi-domain dialogue modeling, such as policy learning BIBREF18, state tracking BIBREF19, and context-to-text generation BIBREF20. Recently the Schema dataset is collected in a machine-to-machine fashion, resulting in 16,142 dialogues and 330K turns for 16 domains in the training set. However, the multi-domain dependency in these two datasets is only embodied in imposing the same pre-specified constraints on different domains, such as requiring a restaurant and an attraction to locate in the same area, or the city of a hotel and the destination of a flight to be the same (Table TABREF6).",
"Table TABREF5 presents a comparison between our dataset with other task-oriented datasets. In comparison to MultiWOZ, our dataset has a comparable scale: 5,012 dialogues and 84K turns in the training set. The average number of domains and turns per dialogue are larger than those of MultiWOZ, which indicates that our task is more complex. The cross-domain dependency in our dataset is natural and challenging. For example, as shown in Table TABREF6, the system needs to recommend a hotel near the attraction chosen by the user in previous turns. Thus, both system recommendation and user selection will dynamically impact the dialogue. We also allow the same domain to appear multiple times in a user goal since a tourist may want to go to more than one attraction.",
"To better track the conversation flow and model user dialogue policy, we provide annotation of user states in addition to system states and dialogue acts. While the system state tracks the dialogue history, the user state is maintained by the user and indicates whether the sub-goals have been completed, which can be used to predict user actions. This information will facilitate the construction of the user simulator.",
"To the best of our knowledge, CrossWOZ is the first large-scale Chinese dataset for task-oriented dialogue systems, which will largely alleviate the shortage of Chinese task-oriented dialogue corpora that are publicly available."
],
[
"Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:",
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
[
"We collected 465 attractions, 951 restaurants, and 1,133 hotels in Beijing from the Web. Some statistics are shown in Table TABREF11. There are three types of slots for each entity: common slots such as name and address; binary slots for hotel services such as wake-up call; nearby attractions/restaurants/hotels slots that contain nearby entities in the attraction, restaurant, and hotel domains. Since it is not usual to find another nearby hotel in the hotel domain, we did not collect such information. This nearby relation allows us to generate natural cross-domain goals, such as \"find another attraction near the first one\" and \"find a restaurant near the attraction\". Nearest metro stations of HAR entities form the metro database. In contrast, we provided the pseudo car type and plate number for the taxi domain."
],
[
"To avoid generating overly complex goals, each goal has at most five sub-goals. To generate more natural goals, the sub-goals can be of the same domain, such as two attractions near each other. The goal is represented as a list of (sub-goal id, domain, slot, value) tuples, named as semantic tuples. The sub-goal id is used to distinguish sub-goals which may be in the same domain. There are two types of slots: informable slots which are the constraints that the user needs to inform the system, and requestable slots which are the information that the user needs to inquire from the system. As shown in Table TABREF13, besides common informable slots (italic values) whose values are determined before the conversation, we specially design cross-domain informable slots (bold values) whose values refer to other sub-goals. Cross-domain informable slots utilize sub-goal id to connect different sub-goals. Thus the actual constraints vary according to the different contexts instead of being pre-specified. The values of common informable slots are sampled randomly from the database. Based on the informable slots, users are required to gather the values of requestable slots (blank values in Table TABREF13) through conversation.",
"There are four steps in goal generation. First, we generate independent sub-goals in HAR domains. For each domain in HAR domains, with the same probability $\\mathcal {P}$ we generate a sub-goal, while with the probability of $1-\\mathcal {P}$ we do not generate any sub-goal for this domain. Each sub-goal has common informable slots and requestable slots. As shown in Table TABREF15, all slots of HAR domains can be requestable slots, while the slots with an asterisk can be common informable slots.",
"Second, we generate cross-domain sub-goals in HAR domains. For each generated sub-goal (e.g., the attraction sub-goal in Table TABREF13), if its requestable slots contain \"nearby hotels\", we generate an additional sub-goal in the hotel domain (e.g., the hotel sub-goal in Table TABREF13) with the probability of $\\mathcal {P}_{attraction\\rightarrow hotel}$. Of course, the selected hotel must satisfy the nearby relation to the attraction entity. Similarly, we do not generate any additional sub-goal in the hotel domain with the probability of $1-\\mathcal {P}_{attraction\\rightarrow hotel}$. This also works for the attraction and restaurant domains. $\\mathcal {P}_{hotel\\rightarrow hotel}=0$ since we do not allow the user to find the nearby hotels of one hotel.",
"Third, we generate sub-goals in the metro and taxi domains. With the probability of $\\mathcal {P}_{taxi}$, we generate a sub-goal in the taxi domain (e.g., the taxi sub-goal in Table TABREF13) to commute between two entities of HAR domains that are already generated. It is similar for the metro domain and we set $\\mathcal {P}_{metro}=\\mathcal {P}_{taxi}$. All slots in the metro or taxi domain appear in the sub-goals and must be filled. As shown in Table TABREF15, from and to slots are always cross-domain informable slots, while others are always requestable slots.",
"Last, we rearrange the order of the sub-goals to generate more natural and logical user goals. We require that a sub-goal should be followed by its referred sub-goal as immediately as possible.",
"To make the workers aware of this cross-domain feature, we additionally provide a task description for each user goal in natural language, which is generated from the structured goal by hand-crafted templates.",
"Compared with the goals whose constraints are all pre-specified, our goals impose much more dependency between different domains, which will significantly influence the conversation. The exact values of cross-domain informable slots are finally determined according to the dialogue context."
],
[
"We developed a specialized website that allows two workers to converse synchronously and make annotations online. On the website, workers are free to choose one of the two roles: tourist (user) or system (wizard). Then, two paired workers are sent to a chatroom. The user needs to accomplish the allocated goal through conversation while the wizard searches the database to provide the necessary information and gives responses. Before the formal data collection, we trained the workers to complete a small number of dialogues by giving them feedback. Finally, 90 well-trained workers are participating in the data collection.",
"In contrast, MultiWOZ BIBREF12 hired more than a thousand workers to converse asynchronously. Each worker received a dialogue context to review and need to respond for only one turn at a time. The collected dialogues may be incoherent because workers may not understand the context correctly and multiple workers contributed to the same dialogue session, possibly leading to more variance in the data quality. For example, some workers expressed two mutually exclusive constraints in two consecutive user turns and failed to eliminate the system's confusion in the next several turns. Compared with MultiWOZ, our synchronous conversation setting may produce more coherent dialogues."
],
[
"The user state is the same as the user goal before a conversation starts. At each turn, the user needs to 1) modify the user state according to the system response at the preceding turn, 2) select some semantic tuples in the user state, which indicates the dialogue acts, and 3) compose the utterance according to the selected semantic tuples. In addition to filling the required values and updating cross-domain informable slots with real values in the user state, the user is encouraged to modify the constraints when there is no result under such constraints. The change will also be recorded in the user state. Once the goal is completed (all the values in the user state are filled), the user can terminate the dialogue."
],
[
"We regard the database query as the system state, which records the constraints of each domain till the current turn. At each turn, the wizard needs to 1) fill the query according to the previous user response and search the database if necessary, 2) select the retrieved entities, and 3) respond in natural language based on the information of the selected entities. If none of the entities satisfy all the constraints, the wizard will try to relax some of them for a recommendation, resulting in multiple queries. The first query records original user constraints while the last one records the constraints relaxed by the system."
],
[
"After collecting the conversation data, we used some rules to annotate dialogue acts automatically. Each utterance can have several dialogue acts. Each dialogue act is a tuple that consists of intent, domain, slot, and value. We pre-define 6 types of intents and use the update of the user state and system state as well as keyword matching to obtain dialogue acts. For the user side, dialogue acts are mainly derived from the selection of semantic tuples that contain the information of domain, slot, and value. For example, if (1, Attraction, fee, free) in Table TABREF13 is selected by the user, then (Inform, Attraction, fee, free) is labelled. If (1, Attraction, name, ) is selected, then (Request, Attraction, name, none) is labelled. If (2, Hotel, name, near (id=1)) is selected, then (Select, Hotel, src_domain, Attraction) is labelled. This intent is specially designed for the \"nearby\" constraint. For the system side, we mainly applied keyword matching to label dialogue acts. Inform intent is derived by matching the system utterance with the information of selected entities. When the wizard selects multiple retrieved entities and recommend them, Recommend intent is labeled. When the wizard expresses that no result satisfies user constraints, NoOffer is labeled. For General intents such as \"goodbye\", \"thanks\" at both user and system sides, keyword matching is applied.",
"We also obtained a binary label for each semantic tuple in the user state, which indicates whether this semantic tuple has been selected to be expressed by the user. This annotation directly illustrates the progress of the conversation.",
"To evaluate the quality of the annotation of dialogue acts and states (both user and system states), three experts were employed to manually annotate dialogue acts and states for the same 50 dialogues (806 utterances), 10 for each goal type (see Section SECREF4). Since dialogue act annotation is not a classification problem, we didn't use Fleiss' kappa to measure the agreement among experts. We used dialogue act F1 and state accuracy to measure the agreement between each two experts' annotations. The average dialogue act F1 is 94.59% and the average state accuracy is 93.55%. We then compared our annotations with each expert's annotations which are regarded as gold standard. The average dialogue act F1 is 95.36% and the average state accuracy is 94.95%, which indicates the high quality of our annotations."
],
[
"After removing uncompleted dialogues, we collected 6,012 dialogues in total. The dataset is split randomly for training/validation/test, where the statistics are shown in Table TABREF25. The average number of sub-goals in our dataset is 3.24, which is much larger than that in MultiWOZ (1.80) BIBREF12 and Schema (1.84) BIBREF13. The average number of turns (16.9) is also larger than that in MultiWOZ (13.7). These statistics indicate that our dialogue data are more complex.",
"According to the type of user goal, we group the dialogues in the training set into five categories:",
"417 dialogues have only one sub-goal in HAR domains.",
"1573 dialogues have multiple sub-goals (2$\\sim $3) in HAR domains. However, these sub-goals do not have cross-domain informable slots.",
"691 dialogues have multiple sub-goals in HAR domains and at least one sub-goal in the metro or taxi domain (3$\\sim $5 sub-goals). The sub-goals in HAR domains do not have cross-domain informable slots.",
"1,759 dialogues have multiple sub-goals (2$\\sim $5) in HAR domains with cross-domain informable slots.",
"572 dialogues have multiple sub-goals in HAR domains with cross-domain informable slots and at least one sub-goal in the metro or taxi domain (3$\\sim $5 sub-goals).",
"The data statistics are shown in Table TABREF26. As mentioned in Section SECREF14, we generate independent multi-domain, cross multi-domain, and traffic domain sub-goals one by one. Thus in terms of the task complexity, we have S<M<CM and M<M+T<CM+T, which is supported by the average number of sub-goals, semantic tuples, and turns per dialogue in Table TABREF26. The average number of tokens also becomes larger when the goal becomes more complex. About 60% of dialogues (M+T, CM, and CM+T) have cross-domain informable slots. Because of the limit of maximal sub-goals number, the ratio of dialogue number of CM+T to CM is smaller than that of M+T to M.",
"CM and CM+T are much more challenging than other tasks because additional cross-domain constraints in HAR domains are strict and will result in more \"NoOffer\" situations (i.e., the wizard finds no result that satisfies the current constraints). In this situation, the wizard will try to relax some constraints and issue multiple queries to find some results for a recommendation while the user will compromise and change the original goal. The negotiation process is captured by \"NoOffer rate\", \"Multi-query rate\", and \"Goal change rate\" in Table TABREF26. In addition, \"Multi-query rate\" suggests that each sub-goal in M and M+T is as easy to finish as the goal in S.",
"The distribution of dialogue length is shown in Figure FIGREF27, which is an indicator of the task complexity. Most single-domain dialogues terminate within 10 turns. The curves of M and M+T are almost of the same shape, which implies that the traffic task requires two additional turns on average to complete the task. The curves of CM and CM+T are less similar. This is probably because CM goals that have 5 sub-goals (about 22%) can not further generate a sub-goal in traffic domains and become CM+T goals."
],
[
"Our corpus is unique in the following aspects:",
"Complex user goals are designed to favor inter-domain dependency and natural transition between multiple domains. In return, the collected dialogues are more complex and natural for cross-domain dialogue tasks.",
"A well-controlled, synchronous setting is applied to collect human-to-human dialogues. This ensures the high quality of the collected dialogues.",
"Explicit annotations are provided at not only the system side but also the user side. This feature allows us to model user behaviors or develop user simulators more easily."
],
[
"CrossWOZ can be used in different tasks or settings of a task-oriented dialogue system. To facilitate further research, we provided benchmark models for different components of a pipelined task-oriented dialogue system (Figure FIGREF32), including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning, and natural language generation (NLG). These models are implemented using ConvLab-2 BIBREF21, an open-source task-oriented dialog system toolkit. We also provided a rule-based user simulator, which can be used to train dialogue policy and generate simulated dialogue data. The benchmark models and simulator will greatly facilitate researchers to compare and evaluate their models on our corpus."
],
[
"Task: The natural language understanding component in a task-oriented dialogue system takes an utterance as input and outputs the corresponding semantic representation, namely, a dialogue act. The task can be divided into two sub-tasks: intent classification that decides the intent type of an utterance, and slot tagging which identifies the value of a slot.",
"Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings (\"free\") as input and outputs tags in BIO schema (\"B-Inform-Attraction-fee\"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLP is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.",
"Result Analysis: The results of the dialogue act prediction (F1 score) are shown in Table TABREF31. We further tested the performance on different intent types, as shown in Table TABREF35. In general, BERTNLU performs well with context information. The performance on cross multi-domain dialogues (CM and CM+T) drops slightly, which may be due to the decrease of \"General\" intent and the increase of \"NoOffer\" as well as \"Select\" intent in the dialogue data. We also noted that the F1 score of \"Select\" intent is remarkably lower than those of other types, but context information can improve the performance significantly. Since recognizing domain transition is a key factor for a cross-domain dialogue system, natural language understanding models need to utilize context information more effectively."
],
[
"Task: Dialogue state tracking is responsible for recognizing user goals from the dialogue context and then encoding the goals into the pre-defined system state. Traditional state tracking models take as input user dialogue acts parsed by natural language understanding modules, while recently there are joint models obtaining the system state directly from the context.",
"Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, If one of user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the \"fee\" slot in the attraction domain will be filled with \"free\". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.",
"Result Analysis: We evaluated the joint state accuracy (percentage of exact matching) of these two models (Table TABREF31). TRADE, the state-of-the-art model on MultiWOZ, performs poorly on our dataset, indicating that more powerful state trackers are necessary. At the test stage, RuleDST can access the previous gold system state and user dialogue acts, which leads to higher joint state accuracy than TRADE. Both models perform worse on cross multi-domain dialogues (CM and CM+T). To evaluate the ability of modeling cross-domain transition, we further calculated joint state accuracy for those turns that receive \"Select\" intent from users (e.g., \"Find a hotel near the attraction\"). The performances are 11.6% and 12.0% for RuleDST and TRADE respectively, showing that they are not able to track domain transition well."
],
[
"Task: Dialogue policy receives state $s$ and outputs system action $a$ at each turn. Compared with the state given by a dialogue state tracker, $s$ may have more information, such as the last user dialogue acts and the entities provided by the backend database.",
"Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, last user dialogue acts, system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ is delexicalized dialogue acts of current turn which ignores the exact values of the slots, where the values will be filled back after prediction.",
"Result Analysis: As illustrated in Table TABREF31, there is a large gap between F1 score of exact dialogue act and F1 score of delexicalized dialogue act, which means we need a powerful system state tracker to find correct entities. The result also shows that cross multi-domain dialogues (CM and CM+T) are harder for system dialogue act prediction. Additionally, when there is \"Select\" intent in preceding user dialogue acts, the F1 score of exact dialogue act and delexicalized dialogue act are 41.53% and 54.39% respectively. This shows that the policy performs poorly for cross-domain transition."
],
[
"Task: Natural language generation transforms a structured dialogue act into a natural language sentence. It usually takes delexicalized dialogue acts as input and generates a template-style sentence that contains placeholders for slots. Then, the placeholders will be replaced by the exact values, which is called lexicalization.",
"Model: We provided a template-based model (named TemplateNLG) and SC-LSTM (Semantically Conditioned LSTM) BIBREF1 for natural language generation. For TemplateNLG, we extracted templates from the training set and manually added some templates for infrequent dialogue acts. For SC-LSTM we adapted the implementation on MultiWOZ and trained two SC-LSTM with system-side and user-side utterances respectively.",
"Result Analysis: We calculated corpus-level BLEU as used by BIBREF1. We took all utterances with the same delexcalized dialogue acts as references (100 references on average), which results in high BLEU score. For user-side utterances, the BLEU score for TemplateNLG is 0.5780, while the BLEU score for SC-LSTM is 0.7858. For system-side, the two scores are 0.6828 and 0.8595. As exemplified in Table TABREF39, the gap between the two models can be attributed to that SC-LSTM generates common pattern while TemplateNLG retrieves original sentence which has more specific information. We do not provide BLEU scores for different goal types (namely, S, M, CM, etc.) because BLEU scores on different corpus are not comparable."
],
[
"Task: A user simulator imitates the behavior of users, which is useful for dialogue policy learning and automatic evaluation. A user simulator at dialogue act level (e.g., the \"Usr Policy\" in Figure FIGREF32) receives the system dialogue acts and outputs user dialogue acts, while a user simulator at natural language level (e.g., the left part in Figure FIGREF32) directly takes system's utterance as input and outputs user's utterance.",
"Model: We built a rule-based user simulator that works at dialogue act level. Different from agenda-based BIBREF24 user simulator that maintains a stack-like agenda, our simulator maintains the user state straightforwardly (Section SECREF17). The simulator will generate a user goal as described in Section SECREF14. At each user turn, the simulator receives system dialogue acts, modifies its state, and outputs user dialogue acts according to some hand-crafted rules. For example, if the system inform the simulator that the attraction is free, then the simulator will fill the \"fee\" slot in the user state with \"free\", and ask for the next empty slot such as \"address\". The simulator terminates when all requestable slots are filled, and all cross-domain informable slots are filled by real values.",
"Result Analysis: During the evaluation, we initialized the user state of the simulator using the previous gold user state. The input to the simulator is the gold system dialogue acts. We used joint state accuracy (percentage of exact matching) to evaluate user state prediction and F1 score to evaluate the prediction of user dialogue acts. The results are presented in Table TABREF31. We can observe that the performance on complex dialogues (CM and CM+T) is remarkably lower than that on simple ones (S, M, and M+T). This simple rule-based simulator is provided to facilitate dialogue policy learning and automatic evaluation, and our corpus supports the development of more elaborated simulators as we provide the annotation of user-side dialogue states and dialogue acts."
],
[
"In addition to corpus-based evaluation for each module, we also evaluated the performance of a whole dialogue system using the user simulator as described above. Three configurations were explored:",
"Simulation at dialogue act level. As shown by the dashed connections in Figure FIGREF32, we used the aforementioned simulator at the user side and assembled the dialogue system with RuleDST and SL policy.",
"Simulation at natural language level using TemplateNLG. As shown by the solid connections in Figure FIGREF32, the simulator and the dialogue system were equipped with BERTNLU and TemplateNLG additionally.",
"Simulation at natural language level using SC-LSTM. TemplateNLG was replaced with SC-LSTM in the second configuration.",
"When all the slots in a user goal are filled by real values, the simulator terminates. This is regarded as \"task finish\". It's worth noting that \"task finish\" does not mean the task is success, because the system may provide wrong information. We calculated \"task finish rate\" on 1000 times simulations for each goal type (See Table TABREF31). Findings are summarized below:",
"Cross multi-domain tasks (CM and CM+T) are much harder to finish. Comparing M and M+T, although each module performs well in traffic domains, additional sub-goals in these domains are still difficult to accomplish.",
"The system-level performance is largely limited by RuleDST and SL policy. Although the corpus-based performance of NLU and NLG modules is high, the two modules still harm the performance. Thus more powerful models are needed for all components of a pipelined dialogue system.",
"TemplateNLG has a much lower BLEU score but performs better than SC-LSTM in natural language level simulation. This may be attributed to that BERTNLU prefers templates retrieved from the training set."
],
[
"In this paper, we present the first large-scale Chinese Cross-Domain task-oriented dialogue dataset, CrossWOZ. It contains 6K dialogues and 102K utterances for 5 domains, with the annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals, which encourage natural transition between related domains. Thanks to the rich annotation of dialogue states and dialogue acts at both user side and system side, this corpus provides a new testbed for a wide range of tasks to investigate cross-domain dialogue modeling, such as dialogue state tracking, policy learning, etc. Our experiments show that the cross-domain constraints are challenging for all these tasks. The transition between related domains is especially challenging to model. Besides corpus-based component-wise evaluation, we also performed system-level evaluation with a user simulator, which requires more powerful models for all components of a pipelined cross-domain dialogue system."
],
[
"This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT JointLab for the support. We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments. We are grateful to our action editor, Bonnie Webber, and the anonymous reviewers for their valuable suggestions and feedback."
]
]
} | {
"question": [
"How was the dataset collected?",
"What are the benchmark models?",
"How was the corpus annotated?"
],
"question_id": [
"2376c170c343e2305dac08ba5f5bda47c370357f",
"0137ecebd84a03b224eb5ca51d189283abb5f6d9",
"5f6fbd57cce47f20a0fda27d954543c00c4344c2"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. ",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
"highlighted_evidence": [
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. ",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. ",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. "
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "They crawled travel information from the Web to build a database, created a multi-domain goal generator from the database, collected dialogue between workers an automatically annotated dialogue acts. ",
"evidence": [
"Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing. Domains include hotel, attraction, restaurant, metro, and taxi. The data collection process is summarized as below:",
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.",
"Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.",
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
"highlighted_evidence": [
"The data collection process is summarized as below:\n\nDatabase Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no need to store the information. Instead, we can call the API directly if necessary.\n\nGoal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. To make workers understand the task more easily, we crafted templates to generate natural language descriptions for each structured goal.\n\nDialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.\n\nDialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
]
}
],
"annotation_id": [
"d1dbe98f982bef1faf43aa1d472c8ed9ffd763fd",
"ff705c27c283670b07e788139cc9e91baa6f328d"
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BERTNLU from ConvLab-2",
"a rule-based model (RuleDST) ",
"TRADE (Transferable Dialogue State Generator) ",
"a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since there may exist more than one intent in an utterance, we modify the traditional method accordingly. For dialogue acts of inform and recommend intents such as (intent=Inform, domain=Attraction, slot=fee, value=free) whose values appear in the sentence, we perform sequential labeling using an MLP which takes word embeddings (\"free\") as input and outputs tags in BIO schema (\"B-Inform-Attraction-fee\"). For each of the other dialogue acts (e.g., (intent=Request, domain=Attraction, slot=fee)) that do not have actual values, we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act. To incorporate context information, we use the same BERT to get the embedding of last three utterances. We separate the utterances with [SEP] tokens and insert a [CLS] token at the beginning. Then each original input of the two MLP is concatenated with the context embedding (embedding of [CLS]), serving as the new input. We also conducted an ablation test by removing context information. We trained models with both system-side and user-side utterances.",
"Model: We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. RuleDST takes as input the previous system state and the last user dialogue acts. Then, the system state is updated according to hand-crafted rules. For example, If one of user dialogue acts is (intent=Inform, domain=Attraction, slot=fee, value=free), then the value of the \"fee\" slot in the attraction domain will be filled with \"free\". TRADE generates the system state directly from all the previous utterances using a copy mechanism. As mentioned in Section SECREF18, the first query of the system often records full user constraints, while the last one records relaxed constraints for recommendation. Thus the last one involves system policy, which is out of the scope of state tracking. We used the first query for these models and left state tracking with recommendation for future work.",
"Model: We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). The state $s$ consists of the last system dialogue acts, last user dialogue acts, system state of the current turn, the number of entities that satisfy the constraints in the current domain, and a terminal signal indicating whether the user goal is completed. The action $a$ is delexicalized dialogue acts of current turn which ignores the exact values of the slots, where the values will be filled back after prediction."
],
"highlighted_evidence": [
"We adapted BERTNLU from ConvLab-2. ",
"We implemented a rule-based model (RuleDST) and adapted TRADE (Transferable Dialogue State Generator) BIBREF19 in this experiment. ",
"We adapted a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy). "
]
}
],
"annotation_id": [
"e6c3ce2d618ab1a5518ad3fd1b92ffd367c2dba8"
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"The workers were also asked to annotate both user states and system states",
"we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.",
"Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
],
"highlighted_evidence": [
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.\n\nDialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances."
]
}
],
"annotation_id": [
"a72845ebd9c3ddb40ace7a4fc7120028f693fa5c"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [],
"file": []
} |
1910.07181 | BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance | Pretraining deep contextualized representations using an unsupervised language modeling objective has led to large performance gains for a variety of NLP tasks. Despite this success, recent work by Schick and Schutze (2019) suggests that these architectures struggle to understand rare words. For context-independent word embeddings, this problem can be addressed by separately learning representations for infrequent words. In this work, we show that the same idea can also be applied to contextualized models and clearly improves their downstream task performance. Most approaches for inducing word embeddings into existing embedding spaces are based on simple bag-of-words models; hence they are not a suitable counterpart for deep neural network language models. To overcome this problem, we introduce BERTRAM, a powerful architecture based on a pretrained BERT language model and capable of inferring high-quality representations for rare words. In BERTRAM, surface form and contexts of a word directly interact with each other in a deep architecture. Both on a rare word probing task and on three downstream task datasets, BERTRAM considerably improves representations for rare and medium frequency words compared to both a standalone BERT model and previous work. | {
"section_name": [
"Introduction",
"Related Work",
"Model ::: Form-Context Model",
"Model ::: Bertram",
"Model ::: Training",
"Generation of Rare Word Datasets",
"Generation of Rare Word Datasets ::: Dataset Splitting",
"Generation of Rare Word Datasets ::: Baseline Training",
"Generation of Rare Word Datasets ::: Test Set Generation",
"Evaluation ::: Setup",
"Evaluation ::: WNLaMPro",
"Evaluation ::: Downstream Task Datasets",
"Conclusion"
],
"paragraphs": [
[
"As traditional word embedding algorithms BIBREF1 are known to struggle with rare words, several techniques for improving their representations have been proposed over the last few years. These approaches exploit either the contexts in which rare words occur BIBREF2, BIBREF3, BIBREF4, BIBREF5, their surface-form BIBREF6, BIBREF7, BIBREF8, or both BIBREF9, BIBREF10. However, all of these approaches are designed for and evaluated on uncontextualized word embeddings.",
"With the recent shift towards contextualized representations obtained from pretrained deep language models BIBREF11, BIBREF12, BIBREF13, BIBREF14, the question naturally arises whether these approaches are facing the same problem. As all of them already handle rare words implicitly – using methods such as byte-pair encoding BIBREF15 and WordPiece embeddings BIBREF16, or even character-level CNNs BIBREF17 –, it is unclear whether these models even require special treatment of rare words. However, the listed methods only make use of surface-form information, whereas BIBREF9 found that for covering a wide range of rare words, it is crucial to consider both surface-form and contexts.",
"Consistently, BIBREF0 recently showed that for BERT BIBREF13, a popular pretrained language model based on a Transformer architecture BIBREF18, performance on a rare word probing task can significantly be improve by relearning representations of rare words using Attentive Mimicking BIBREF19. However, their proposed model is limited in two important respects:",
"For processing contexts, it uses a simple bag-of-words model, throwing away much of the available information.",
"It combines form and context only in a shallow fashion, thus preventing both input signals from sharing information in any sophisticated manner.",
"Importantly, this limitation applies not only to their model, but to all previous work on obtaining representations for rare words by leveraging form and context. While using bag-of-words models is a reasonable choice for uncontextualized embeddings, which are often themselves based on such models BIBREF1, BIBREF7, it stands to reason that they are suboptimal for contextualized embeddings based on position-aware deep neural architectures.",
"To overcome these limitations, we introduce Bertram (BERT for Attentive Mimicking), a novel architecture for understanding rare words that combines a pretrained BERT language model with Attentive Mimicking BIBREF19. Unlike previous approaches making use of language models BIBREF5, our approach integrates BERT in an end-to-end fashion and directly makes use of its hidden states. By giving Bertram access to both surface form and context information already at its very lowest layer, we allow for a deep connection and exchange of information between both input signals.",
"For various reasons, assessing the effectiveness of methods like Bertram in a contextualized setting poses a huge difficulty: While most previous work on rare words was evaluated on datasets explicitly focusing on such words BIBREF6, BIBREF3, BIBREF4, BIBREF5, BIBREF10, all of these datasets are tailored towards context-independent embeddings and thus not suitable for evaluating our proposed model. Furthermore, understanding rare words is of negligible importance for most commonly used downstream task datasets. To evaluate our proposed model, we therefore introduce a novel procedure that allows us to automatically turn arbitrary text classification datasets into ones where rare words are guaranteed to be important. This is achieved by replacing classification-relevant frequent words with rare synonyms obtained using semantic resources such as WordNet BIBREF20.",
"Using this procedure, we extract rare word datasets from three commonly used text (or text pair) classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. On both the WNLaMPro dataset of BIBREF0 and all three so-obtained datasets, our proposed Bertram model outperforms previous work by a large margin.",
"In summary, our contributions are as follows:",
"We show that a pretrained BERT instance can be integrated into Attentive Mimicking, resulting in much better context representations and a deeper connection of form and context.",
"We design a procedure that allows us to automatically transform text classification datasets into datasets for which rare words are guaranteed to be important.",
"We show that Bertram achieves a new state-of-the-art on the WNLaMPro probing task BIBREF0 and beats all baselines on rare word instances of AG's News, MNLI and DBPedia, resulting in an absolute improvement of up to 24% over a BERT baseline."
],
[
"Incorporating surface-form information (e.g., morphemes, characters or character $n$-grams) is a commonly used technique for improving word representations. For context-independent word embeddings, this information can either be injected into a given embedding space BIBREF6, BIBREF8, or a model can directly be given access to it during training BIBREF7, BIBREF24, BIBREF25. In the area of contextualized representations, many architectures employ subword segmentation methods BIBREF12, BIBREF13, BIBREF26, BIBREF14, whereas others use convolutional neural networks to directly access character-level information BIBREF27, BIBREF11, BIBREF17.",
"Complementary to surface form, another useful source of information for understanding rare words are the contexts in which they occur BIBREF2, BIBREF3, BIBREF4. As recently shown by BIBREF19, BIBREF9, combining form and context leads to significantly better results than using just one of both input signals for a wide range of tasks. While all aforementioned methods are based on simple bag-of-words models, BIBREF5 recently proposed an architecture based on the context2vec language model BIBREF28. However, in contrast to our work, they (i) do not incorporate surface-form information and (ii) do not directly access the hidden states of the language model, but instead simply use its output distribution.",
"There are several datasets explicitly focusing on rare words, e.g. the Stanford Rare Word dataset of BIBREF6, the Definitional Nonce dataset of BIBREF3 and the Contextual Rare Word dataset BIBREF4. However, all of these datasets are only suitable for evaluating context-independent word representations.",
"Our proposed method of generating rare word datasets is loosely related to adversarial example generation methods such as HotFlip BIBREF29, which manipulate the input to change a model's prediction. We use a similar mechanism to determine which words in a given sentence are most important and replace these words with rare synonyms."
],
[
"We review the architecture of the form-context model (FCM) BIBREF9, which forms the basis for our model. Given a set of $d$-dimensional high-quality embeddings for frequent words, FCM can be used to induce embeddings for infrequent words that are appropriate for the given embedding space. This is done as follows: Given a word $w$ and a context $C$ in which it occurs, a surface-form embedding $v_{(w,{C})}^\\text{form} \\in \\mathbb {R}^d$ is obtained similar to BIBREF7 by averaging over embeddings of all $n$-grams in $w$; these $n$-gram embeddings are learned during training. Similarly, a context embedding $v_{(w,{C})}^\\text{context} \\in \\mathbb {R}^d$ is obtained by averaging over the embeddings of all words in $C$. The so-obtained form and context embeddings are then combined using a gate",
"with parameters $w \\in \\mathbb {R}^{2d}, b \\in \\mathbb {R}$ and $\\sigma $ denoting the sigmoid function, allowing the model to decide for each pair $(x,y)$ of form and context embeddings how much attention should be paid to $x$ and $y$, respectively.",
"The final representation of $w$ is then simply a weighted sum of form and context embeddings:",
"where $\\alpha = g(v_{(w,C)}^\\text{form}, v_{(w,C)}^\\text{context})$ and $A$ is a $d\\times d$ matrix that is learned during training.",
"While the context-part of FCM is able to capture the broad topic of numerous rare words, in many cases it is not able to obtain a more concrete and detailed understanding thereof BIBREF9. This is hardly surprising given the model's simplicity; it does, for example, make no use at all of the relative positions of context words. Furthermore, the simple gating mechanism results in only a shallow combination of form and context. That is, the model is not able to combine form and context until the very last step: While it can choose how much to attend to form and context, respectively, the corresponding embeddings do not share any information and thus cannot influence each other in any way."
],
[
"To overcome both limitations described above, we introduce Bertram, an approach that combines a pretrained BERT language model BIBREF13 with Attentive Mimicking BIBREF19. To this end, let $d_h$ be the hidden dimension size and $l_\\text{max}$ be the number of layers for the BERT model being used. We denote with $e_{t}$ the (uncontextualized) embedding assigned to a token $t$ by BERT and, given a sequence of such uncontextualized embeddings $\\mathbf {e} = e_1, \\ldots , e_n$, we denote by $\\textbf {h}_j^l(\\textbf {e})$ the contextualized representation of the $j$-th token at layer $l$ when the model is given $\\mathbf {e}$ as input.",
"Given a word $w$ and a context $C = w_1, \\ldots , w_n$ in which it occurs, let $\\mathbf {t} = t_1, \\ldots , t_{m}$ with $m \\ge n$ be the sequence obtained from $C$ by (i) replacing $w$ with a [MASK] token and (ii) tokenizing the so-obtained sequence to match the BERT vocabulary; furthermore, let $i$ denote the index for which $t_i = \\texttt {[MASK]}$. Perhaps the most simple approach for obtaining a context embedding from $C$ using BERT is to define",
"where $\\mathbf {e} = e_{t_1}, \\ldots , e_{t_m}$. The so-obtained context embedding can then be combined with its form counterpart as described in Eq. DISPLAY_FORM8. While this achieves our first goal of using a more sophisticated context model that can potentially gain a deeper understanding of a word than just its broad topic, the so-obtained architecture still only combines form and context in a shallow fashion. We thus refer to it as the shallow variant of our model and investigate two alternative approaches (replace and add) that work as follows:",
"Replace: Before computing the context embedding, we replace the uncontextualized embedding of the [MASK] token with the word's surface-form embedding:",
"As during BERT pretraining, words chosen for prediction are replaced with [MASK] tokens only 80% of the time and kept unchanged 10% of the time, we hypothesize that even without further training, BERT is able to make use of form embeddings ingested this way.",
"Add: Before computing the context embedding, we prepad the input with the surface-form embedding of $w$, followed by a colon:",
"We also experimented with various other prefixes, but ended up choosing this particular strategy because we empirically found that after masking a token $t$, adding the sequence “$t :$” at the beginning helps BERT the most in recovering this very token at the masked position.",
"tnode/.style=rectangle, inner sep=0.1cm, minimum height=4ex, text centered,text height=1.5ex, text depth=0.25ex, opnode/.style=draw, rectangle, rounded corners, minimum height=4ex, minimum width=4ex, text centered, arrow/.style=draw,->,>=stealth",
"As for both variants, surface-form information is directly and deeply integrated into the computation of the context embedding, we do not require any further gating mechanism and may directly set $v_{(w,C)} = A \\cdot v^\\text{context}_{(w,C)}$.",
"However, we note that for the add variant, the contextualized representation of the [MASK] token is not the only natural candidate to be used for computing the final embedding: We might just as well look at the contextualized representation of the surface-form based embedding added at the very first position. Therefore, we also try a shallow combination of both embeddings. Note, however, that unlike FCM, we combine the contextualized representations – that is, the form part was already influenced by the context part and vice versa before combining them using a gate. For this combination, we define",
"with $A^{\\prime } \\in \\mathbb {R}^{d \\times d_h}$ being an additional learnable parameter. We then combine the two contextualized embeddings similar to Eq. DISPLAY_FORM8 as",
"where $\\alpha = g(h^\\text{form}_{(w,C)}, h^\\text{context}_{(w,C)})$. We refer to this final alternative as the add-gated approach. The model architecture for this variant can be seen in Figure FIGREF14 (left).",
"As in many cases, not just one, but a handful of contexts is known for a rare word, we follow the approach of BIBREF19 to deal with multiple contexts: We add an Attentive Mimicking head on top of our model, as can be seen in Figure FIGREF14 (right). That is, given a set of contexts $\\mathcal {C} = \\lbrace C_1, \\ldots , C_m\\rbrace $ and the corresponding embeddings $v_{(w,C_1)}, \\ldots , v_{(w,C_m)}$, we apply a self-attention mechanism to all embeddings, allowing the model to distinguish informative contexts from uninformative ones. The final embedding $v_{(w, \\mathcal {C})}$ is then a linear combination of the embeddings obtained from each context, where the weight of each embedding is determined based on the self-attention layer. For further details on this mechanism, we refer to BIBREF19."
],
[
"Like previous work, we use mimicking BIBREF8 as a training objective. That is, given a frequent word $w$ with known embedding $e_w$ and a set of corresponding contexts $\\mathcal {C}$, Bertram is trained to minimize $\\Vert e_w - v_{(w, \\mathcal {C})}\\Vert ^2$.",
"As training Bertram end-to-end requires much computation (processing a single training instance $(w,\\mathcal {C})$ is as costly as processing an entire batch of $|\\mathcal {C}|$ examples in the original BERT architecture), we resort to the following three-stage training process:",
"We train only the form part, i.e. our loss for a single example $(w, \\mathcal {C})$ is $\\Vert e_w - v^\\text{form}_{(w, \\mathcal {C})} \\Vert ^2$.",
"We train only the context part, minimizing $\\Vert e_w - A \\cdot v^\\text{context}_{(w, \\mathcal {C})} \\Vert ^2$ where the context embedding is obtained using the shallow variant of Bertram. Furthermore, we exclude all of BERT's parameters from our optimization.",
"We combine the pretrained form-only and context-only model and train all additional parameters.",
"Pretraining the form and context parts individually allows us to train the full model for much fewer steps with comparable results. Importantly, for the first two stages of our training procedure, we do not have to backpropagate through the entire BERT model to obtain all required gradients, drastically increasing the training speed."
],
[
"To measure the quality of rare word representations in a contextualized setting, we would ideally need text classification datasets with the following two properties:",
"A model that has no understanding of rare words at all should perform close to 0%.",
"A model that perfectly understands rare words should be able to classify every instance correctly.",
"Unfortunately, this requirement is not even remotely fulfilled by most commonly used datasets, simply because rare words occur in only a few entries and when they do, they are often of negligible importance.",
"To solve this problem, we devise a procedure to automatically transform existing text classification datasets such that rare words become important. For this procedure, we require a pretrained language model $M$ as a baseline, an arbitrary text classification dataset $\\mathcal {D}$ containing labelled instances $(\\mathbf {x}, y)$ and a substitution dictionary $S$, mapping each word $w$ to a set of rare synonyms $S(w)$. Given these ingredients, our procedure consists of three steps: (i) splitting the dataset into a train set and a set of test candidates, (ii) training the baseline model on the train set and (iii) modifying a subset of the test candidates to generate the final test set."
],
[
"We partition $\\mathcal {D}$ into a train set $\\mathcal {D}_\\text{train}$ and a set of test candidates, $\\mathcal {D}_\\text{cand}$, with the latter containing all instances $(\\mathbf {x},y) \\in \\mathcal {D}$ such that for at least one word $w$ in $\\mathbf {x}$, $S(w) \\ne \\emptyset $. Additionally, we require that the training set consists of at least one third of the entire data."
],
[
"We finetune $M$ on $\\mathcal {D}_\\text{train}$. Let $(\\mathbf {x}, y) \\in \\mathcal {D}_\\text{train}$ where $\\mathbf {x} = w_1, \\ldots , w_n$ is a sequence of words. We deviate from the standard finetuning procedure of BIBREF13 in three respects:",
"We randomly replace 5% of all words in $\\mathbf {x}$ with a [MASK] token. This allows the model to cope with missing or unknown words, a prerequisite for our final test set generation.",
"As an alternative to overwriting the language model's uncontextualized embeddings for rare words, we also want to allow models to simply add an alternative representation during test time, in which case we simply separate both representations by a slash. To accustom the language model to this duplication of words, we replace each word $w_i$ with “$w_i$ / $w_i$” with a probability of 10%. To make sure that the model does not simply learn to always focus on the first instance during training, we randomly mask each of the two repetitions with probability 25%.",
"We do not finetune the model's embedding layer. In preliminary experiments, we found this not to hurt performance."
],
[
"Let $p(y \\mid \\mathbf {x})$ be the probability that the finetuned model $M$ assigns to class $y$ given input $\\mathbf {x}$, and let",
"be the model's prediction for input $\\mathbf {x}$ where $\\mathcal {Y}$ denotes the set of all labels. For generating our test set, we only consider candidates that are classified correctly by the baseline model, i.e. candidates $(\\mathbf {x}, y) \\in \\mathcal {D}_\\text{cand}$ with $M(\\mathbf {x}) = y$. For each such entry, let $\\mathbf {x} = w_1, \\ldots , w_n$ and let $\\mathbf {x}_{w_i = t}$ be the sequence obtained from $\\mathbf {x}$ by replacing $w_i$ with $t$. We compute",
"i.e., we select the word $w_i$ whose masking pushes the model's prediction the furthest away from the correct label. If removing this word already changes the model's prediction – that is, $M(\\mathbf {x}_{w_i = \\texttt {[MASK]}}) \\ne y$ –, we select a random rare synonym $\\hat{w}_i \\in S(w_i)$ and add $(\\mathbf {x}_{w_i = \\hat{w}_i}, y)$ to the test set. Otherwise, we repeat the above procedure; if the label still has not changed after masking up to 5 words, we discard the corresponding entry. All so-obtained test set entries $(\\mathbf {x}_{w_{i_1} = \\hat{w}_{i_1}, \\ldots , w_{i_k} = \\hat{w}_{i_k} }, y)$ have the following properties:",
"If each $w_{i_j}$ is replaced by a [MASK] token, the entry is classified incorrectly by $M$. In other words, understanding the words $w_{i_j}$ is essential for $M$ to determine the correct label.",
"If the model's internal representation of each $\\hat{w}_{i_j}$ is equal to its representation of $w_{i_j}$, the entry is classified correctly by $M$. That is, if the model is able to understand the rare words $\\hat{w}_{i_j}$ and to identify them as synonyms of ${w_{i_j}}$, it predicts the correct label for each instance.",
"It is important to notice that the so-obtained test set is very closely coupled to the baseline model $M$, because we selected the words to replace based on the model's predictions. Importantly, however, the model is never queried with any rare synonym during test set generation, so its representations of rare words are not taken into account for creating the test set. Thus, while the test set is not suitable for comparing $M$ with an entirely different model $M^{\\prime }$, it allows us to compare various strategies for representing rare words in the embedding space of $M$. A similar constraint can be found in the Definitional Nonce dataset BIBREF3, which is tied to a given embedding space based on Word2Vec BIBREF1."
],
[
"For our evaluation of Bertram, we largely follow the experimental setup of BIBREF0. Our implementation of Bertram is based on PyTorch BIBREF30 and the Transformers library of BIBREF31. Throughout all of our experiments, we use BERT$_\\text{base}$ as the underlying language model for Bertram. To obtain embeddings for frequent multi-token words during training, we use one-token approximation BIBREF0. Somewhat surprisingly, we found in preliminary experiments that excluding BERT's parameters from the finetuning procedure outlined in Section SECREF17 improves performance while speeding up training; we thus exclude them in the third step of our training procedure.",
"While BERT was trained on BooksCorpus BIBREF32 and a large Wikipedia dump, we follow previous work and train Bertram on only the much smaller Westbury Wikipedia Corpus (WWC) BIBREF33; this of course gives BERT a clear advantage over our proposed method. In order to at least partially compensate for this, in our downstream task experiments we gather the set of contexts $\\mathcal {C}$ for a given rare word from both the WWC and BooksCorpus during inference."
],
[
"We evalute Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like",
"and the task is to correctly fill the slot (____) with one of several acceptable target words (e.g., “fruit”, “bush” and “berry”), which requires knowledge of the phrase's keyword (“lingonberry” in the above example). As the goal of this dataset is to probe a language model's ability to understand rare words without any task-specific finetuning, BIBREF0 do not provide a training set. Furthermore, the dataset is partitioned into three subsets; this partition is based on the frequency of the keyword, with keywords occurring less than 10 times in the WWC forming the rare subset, those occurring between 10 and 100 times forming the medium subset, and all remaining words forming the frequent subset. As our focus is on improving representations for rare words, we evaluate our model only on the former two sets.",
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
],
[
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35.",
"Just like for WNLaMPro, our default way of injecting Bertram embeddings into the baseline model is to replace the sequence of uncontextualized WordPiece tokens for a given rare word with its Bertram-based embedding. That is, given a sequence of uncontextualized token embeddings $\\mathbf {e} = e_1, \\ldots , e_n$ where $e_{i}, \\ldots , e_{i+j}$ with $1 \\le i \\le i+j \\le n$ is the sequence of WordPiece embeddings for a single rare word $w$, we replace $\\mathbf {e}$ with",
"By default, the set of contexts $\\mathcal {C}$ required for this replacement is obtained by collecting all sentences from the WWC and BooksCorpus in which $w$ occurs. As our model architecture allows us to easily include new contexts without requiring any additional training, we also try a variant where we add in-domain contexts by giving the model access to the texts found in the test set.",
"In addition to the procedure described above, we also try a variant where instead of replacing the original WordPiece embeddings for a given rare word, we merely add the Bertram-based embedding, separating both representations using a single slash:",
"As it performs best on the rare and medium subsets of WNLaMPro combined, we use only the add-gated variant of Bertram for all datasets. Results can be seen in Table TABREF37, where for each task, we report the accuracy on the entire dataset as well as scores obtained considering only instances where at least one word was replaced by a misspelling or a WordNet synonym, respectively. Consistent with results on WNLaMPro, combining BERT with Bertram outperforms both a standalone BERT model and one combined with Attentive Mimicking across all tasks. While keeping the original BERT embeddings in addition to Bertram's representation brings no benefit, adding in-domain data clearly helps for two out of three datasets. This makes sense as for rare words, every single additional context can be crucial for gaining a deeper understanding.",
"To further understand for which words using Bertram is helpful, in Figure FIGREF39 we look at the accuracy of BERT both with and without Bertram on all three tasks as a function of word frequency. That is, we compute the accuracy scores for both models when considering only entries $(\\mathbf {x}_{w_{i_1} = \\hat{w}_{i_1}, \\ldots , w_{i_k} = \\hat{w}_{i_k} }, y)$ where each substituted word $\\hat{w}_{i_j}$ occurs less than $c_\\text{max}$ times in WWC and BooksCorpus, for various values of $c_\\text{max}$. As one would expect, $c_\\text{max}$ is positively correlated with the accuracies of both models, showing that the rarer a word is, the harder it is to understand. Perhaps more interestingly, for all three datasets the gap between Bertram and BERT remains more or less constant regardless of $c_\\text{max}$. This indicates that using Bertram might also be useful for even more frequent words than the ones considered."
],
[
"We have introduced Bertram, a novel architecture for relearning high-quality representations of rare words. This is achieved by employing a powerful pretrained language model and deeply connecting surface-form and context information. By replacing important words with rare synonyms, we have created various downstream task datasets focusing on rare words; on all of these datasets, Bertram improves over a BERT model without special handling of rare words, demonstrating the usefulness of our proposed method.",
"As our analysis has shown that even for the most frequent words considered, using Bertram is still beneficial, future work might further investigate the limits of our proposed method. Furthermore, it would be interesting to explore more complex ways of incorporating surface-form information – e.g., by using a character-level CNN similar to the one of BIBREF27 – to balance out the potency of Bertram's form and context parts."
]
]
} | {
"question": [
"What models other than standalone BERT is new model compared to?",
"How much is representaton improved for rare/medum frequency words compared to standalone BERT and previous work?",
"What are three downstream task datasets?",
"What is dataset for word probing task?"
],
"question_id": [
"d6e2b276390bdc957dfa7e878de80cee1f41fbca",
"32537fdf0d4f76f641086944b413b2f756097e5e",
"ef081d78be17ef2af792e7e919d15a235b8d7275",
"537b2d7799124d633892a1ef1a485b3b071b303d"
],
"nlp_background": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Only Bert base and Bert large are compared to proposed approach.",
"evidence": [
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
],
"highlighted_evidence": [
"Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
]
}
],
"annotation_id": [
"d01e0f2398f8229187e2e368b2b09229b352b9a7"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\\text{base}$ model, our architecture even outperforms a standalone BERT$_\\text{large}$ model by a large margin."
],
"highlighted_evidence": [
"Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking."
]
}
],
"annotation_id": [
"b4a55e4cc1e42a71095f3c6e06272669f6706228"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"MNLI BIBREF21",
"AG's News BIBREF22",
"DBPedia BIBREF23"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35."
],
"highlighted_evidence": [
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. "
]
},
{
"unanswerable": false,
"extractive_spans": [
"MNLI",
"AG's News",
"DBPedia"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\\text{base}$ as a baseline model and create the substitution dictionary $S$ using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech. As an additional source of word substitutions, we make use of the misspellings dataset of BIBREF25, which is based on query logs of a search engine. To prevent misspellings from dominating the resulting dataset, we only assign misspelling-based substitutes to randomly selected 10% of the words contained in each sentence. Motivated by the results on WNLaMPro-medium, we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare. Some examples of entries in the resulting datasets can be seen in Table TABREF35."
],
"highlighted_evidence": [
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23."
]
}
],
"annotation_id": [
"709376b155cf4c245d587fd6177d3ce8b4e23a32",
"a7eec1f4a5f97265f08cfd09b1cec20b97c573f6"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"WNLaMPro dataset"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We evalute Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like"
],
"highlighted_evidence": [
"We evalute Bertram on the WNLaMPro dataset of BIBREF0."
]
}
],
"annotation_id": [
"0055f1c704b2380b3f9692330601906890b9b49d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Schematic representation of BERTRAM in the add-gated configuration processing the input word w = “washables” given a single context C1 = “other washables such as trousers . . .” (left) and given multiple contexts C = {C1, . . . , Cm} (right)",
"Table 1: Results on WNLaMPro test for baseline models and all BERTRAM variants",
"Table 2: Exemplary entries from the datasets obtained through our procedure. Replaced words from the original datasets are shown crossed out, their rare replacements are in bold.",
"Table 3: Results for BERT, Attentive Mimicking and BERTRAM on rare word datasets generated from AG’s News, MNLI and DBPedia. For each dataset, accuracy for all training instances as well as for those instances containing at least one misspelling (Msp) and those containing at least one rare WordNet synonym (WN) is shown.",
"Figure 2: Comparison of BERT and BERTRAM on three downstream tasks for varying maximum numbers of contexts cmax"
],
"file": [
"5-Figure1-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"9-Figure2-1.png"
]
} |
1902.00330 | Joint Entity Linking with Deep Reinforcement Learning | Entity linking is the task of aligning mentions to corresponding entities in a given knowledge base. Previous studies have highlighted the necessity for entity linking systems to capture the global coherence. However, there are two common weaknesses in previous global models. First, most of them calculate the pairwise scores between all candidate entities and select the most relevant group of entities as the final result. In this process, the consistency among wrong entities as well as that among right ones is involved, which may introduce noisy data and increase the model complexity. Second, the cues of previously disambiguated entities, which could contribute to the disambiguation of the subsequent mentions, are usually ignored by previous models. To address these problems, we convert the global linking into a sequence decision problem and propose a reinforcement learning model which makes decisions from a global perspective. Our model makes full use of the previously referred entities and explores the long-term influence of the current selection on subsequent decisions. We conduct experiments on different types of datasets; the results show that our model outperforms state-of-the-art systems and has better generalization performance. | {
"section_name": [
"Introduction",
"Methodology",
"Preliminaries",
"Local Encoder",
"Global Encoder",
"Entity Selector",
"Experiment",
"Experiment Setup",
"Comparing with Previous Work",
"Discussion on different RLEL variants",
"Case Study",
"Related Work",
"Entity Linking",
"Reinforcement Learning",
"Conclusions"
],
"paragraphs": [
[
"Entity Linking (EL), which is also called Entity Disambiguation (ED), is the task of mapping mentions in text to corresponding entities in a given knowledge Base (KB). This task is an important and challenging stage in text understanding because mentions are usually ambiguous, i.e., different named entities may share the same surface form and the same entity may have multiple aliases. EL is key for information retrieval (IE) and has many applications, such as knowledge base population (KBP), question answering (QA), etc.",
"Existing EL methods can be divided into two categories: local model and global model. Local models concern mainly on contextual words surrounding the mentions, where mentions are disambiguated independently. These methods are not work well when the context information is not rich enough. Global models take into account the topical coherence among the referred entities within the same document, where mentions are disambiguated jointly. Most of previous global models BIBREF0 , BIBREF1 , BIBREF2 calculate the pairwise scores between all candidate entities and select the most relevant group of entities. However, the consistency among wrong entities as well as that among right ones are involved, which not only increases the model complexity but also introduces some noises. For example, in Figure 1, there are three mentions \"France\", \"Croatia\" and \"2018 World Cup\", and each mention has three candidate entities. Here, \"France\" may refer to French Republic, France national basketball team or France national football team in KB. It is difficult to disambiguate using local models, due to the scarce common information in the contextual words of \"France\" and the descriptions of its candidate entities. Besides, the topical coherence among the wrong entities related to basketball team (linked by an orange dashed line) may make the global models mistakenly refer \"France\" to France national basketball team. So, how to solve these problems?",
"We note that, mentions in text usually have different disambiguation difficulty according to the quality of contextual information and the topical coherence. Intuitively, if we start with mentions that are easier to disambiguate and gain correct results, it will be effective to utilize information provided by previously referred entities to disambiguate subsequent mentions. In the above example, it is much easier to map \"2018 World Cup\" to 2018 FIFA World Cup based on their common contextual words \"France\", \"Croatia\", \"4-2\". Then, it is obvious that \"France\" and \"Croatia\" should be referred to the national football team because football-related terms are mentioned many times in the description of 2018 FIFA World Cup.",
"Inspired by this intuition, we design the solution with three principles: (i) utilizing local features to rank the mentions in text and deal with them in a sequence manner; (ii) utilizing the information of previously referred entities for the subsequent entity disambiguation; (iii) making decisions from a global perspective to avoid the error propagation if the previous decision is wrong.",
"In order to achieve these aims, we consider global EL as a sequence decision problem and proposed a deep reinforcement learning (RL) based model, RLEL for short, which consists of three modules: Local Encoder, Global Encoder and Entity Selector. For each mention and its candidate entities, Local Encoder encodes the local features to obtain their latent vector representations. Then, the mentions are ranked according to their disambiguation difficulty, which is measured by the learned vector representations. In order to enforce global coherence between mentions, Global Encoder encodes the local representations of mention-entity pairs in a sequential manner via a LSTM network, which maintains a long-term memory on features of entities which has been selected in previous states. Entity Selector uses a policy network to choose the target entities from the candidate set. For a single disambiguation decision, the policy network not only considers the pairs of current mention-entity representations, but also concerns the features of referred entities in the previous states which is pursued by the Global Encoder. In this way, Entity Selector is able to take actions based on the current state and previous ones. When eliminating the ambiguity of all mentions in the sequence, delayed rewards are used to adjust its policy in order to gain an optimized global decision.",
"Deep RL model, which learns to directly optimize the overall evaluation metrics, works much better than models which learn with loss functions that just evaluate a particular single decision. By this property, RL has been successfully used in many NLP tasks, such as information retrieval BIBREF3 , dialogue system BIBREF4 and relation classification BIBREF5 , etc. To the best of our knowledge, we are the first to design a RL model for global entity linking. And in this paper, our RL model is able to produce more accurate results by exploring the long-term influence of independent decisions and encoding the entities disambiguated in previous states.",
"In summary, the main contributions of our paper mainly include following aspects:"
],
[
"The overall structure of our RLEL model is shown in Figure 2. The proposed framework mainly includes three parts: Local Encoder which encodes local features of mentions and their candidate entities, Global Encoder which encodes the global coherence of mentions in a sequence manner and Entity Selector which selects an entity from the candidate set. As the Entity Selector and the Global Encoder are correlated mutually, we train them jointly. Moreover, the Local Encoder as the basis of the entire framework will be independently trained before the joint training process starts. In the following, we will introduce the technical details of these modules."
],
[
"Before introducing our model, we firstly define the entity linking task. Formally, given a document $D$ with a set of mentions $M = \\lbrace m_1, m_2,...,m_k\\rbrace $ , each mention $ m_t \\in D$ has a set of candidate entities $C_{m_t} = \\lbrace e_{t}^1, e_{t}^2,..., e_{t}^n\\rbrace $ . The task of entity linking is to map each mention $m_t$ to its corresponding correct target entity $e_{t}^+$ or return \"NIL\" if there is not correct target entity in the knowledge base. Before selecting the target entity, we need to generate a certain number of candidate entities for model selection.",
"Inspired by the previous works BIBREF6 , BIBREF7 , BIBREF8 , we use the mention's redirect and disambiguation pages in Wikipedia to generate candidate sets. For those mentions without corresponding disambiguation pages, we use its n-grams to retrieve the candidates BIBREF8 . In most cases, the disambiguation page contains many entities, sometimes even hundreds. To optimize the model's memory and avoid unnecessary calculations, the candidate sets need to be filtered BIBREF9 , BIBREF0 , BIBREF1 . Here we utilize the XGBoost model BIBREF10 as an entity ranker to reduce the size of candidate set. The features used in XGBoost can be divided into two aspects, the one is string similarity like the Jaro-Winkler distance between the entity title and the mention, the other is semantic similarity like the cosine distance between the mention context representation and the entity embedding. Furthermore, we also use the statistical features based on the pageview and hyperlinks in Wikipedia. Empirically, we get the pageview of the entity from the Wikipedia Tool Labs which counts the number of visits on each entity page in Wikipedia. After ranking the candidate sets based on the above features, we take the top k scored entities as final candidate set for each mention."
],
[
"Given a mention $m_t$ and the corresponding candidate set $\\lbrace e_t^1, e_t^2,..., \\\\ e_t^k\\rbrace $ , we aim to get their local representation based on the mention context and the candidate entity description. For each mention, we firstly select its $n$ surrounding words, and represent them as word embedding using a pre-trained lookup table BIBREF11 . Then, we use Long Short-Term Memory (LSTM) networks to encode the contextual word sequence $\\lbrace w_c^1, w_c^2,..., w_c^n\\rbrace $ as a fixed-size vector $V_{m_t}$ . The description of entity is encoded as $D_{e_t^i}$ in the same way. Apart from the description of entity, there are many other valuable information in the knowledge base. To make full use of these information, many researchers trained entity embeddings by combining the description, category, and relationship of entities. As shown in BIBREF0 , entity embeddings compress the semantic meaning of entities and drastically reduce the need for manually designed features or co-occurrence statistics. Therefore, we use the pre-trained entity embedding $E_{e_t^i}$ and concatenate it with the description vector $D_{e_t^i}$ to enrich the entity representation. The concatenation result is denoted by $V_{e_t^i}$ .",
"After getting $V_{e_t^i}$ , we concatenate it with $V_{m_t}$ and then pass the concatenation result to a multilayer perceptron (MLP). The MLP outputs a scalar to represent the local similarity between the mention $m_t$ and the candidate entity $e_t^i$ . The local similarity is calculated by the following equations: ",
"$$\\Psi (m_t, e_t^i) = MLP(V_{m_t}\\oplus {V_{e_t^i}})$$ (Eq. 9) ",
"Where $\\oplus $ indicates vector concatenation. With the purpose of distinguishing the correct target entity and wrong candidate entities when training the local encoder model, we utilize a hinge loss that ranks ground truth higher than others. The rank loss function is defined as follows: ",
"$$L_{local} = max(0, \\gamma -\\Psi (m_t, e_t^+)+\\Psi (m_t, e_t^-))$$ (Eq. 10) ",
"When optimizing the objective function, we minimize the rank loss similar to BIBREF0 , BIBREF1 . In this ranking model, a training instance is constructed by pairing a positive target entity $e_t^+$ with a negative entity $e_t^-$ . Where $\\gamma > 0$ is a margin parameter and our purpose is to make the score of the positive target entity $e_t^+$ is at least a margin $\\gamma $ higher than that of negative candidate entity $e_t^-$ .",
"With the local encoder, we obtain the representation of mention context and candidate entities, which will be used as the input into the global encoder and entity selector. In addition, the similarity scores calculated by MLP will be utilized for ranking mentions in the global encoder."
],
[
"In the global encoder module, we aim to enforce the topical coherence among the mentions and their target entities. So, we use an LSTM network which is capable of maintaining the long-term memory to encode the ranked mention sequence. What we need to emphasize is that our global encoder just encode the mentions that have been disambiguated by the entity selector which is denoted as $V_{a_t}$ .",
"As mentioned above, the mentions should be sorted according to their contextual information and topical coherence. So, we firstly divide the adjacent mentions into a segment by the order they appear in the document based on the observation that the topical consistency attenuates along with the distance between the mentions. Then, we sort mentions in a segment based on the local similarity and place the mention that has a higher similarity value in the front of the sequence. In Equation 1, we define the local similarity of $m_i$ and its corresponding candidate entity $e_t^i$ . On this basis, we define $\\Psi _{max}(m_i, e_i^a)$ as the the maximum local similarity between the $m_i$ and its candidate set $C_{m_i} = \\lbrace e_i^1, e_i^2,..., e_i^n\\rbrace $ . We use $\\Psi _{max}(m_i, e_i^a)$ as criterion when sorting mentions. For instance, if $\\Psi _{max}(m_i, e_i^a) > \\Psi _{max}(m_j, e_j^b)$ then we place $m_i$ before $m_j$ . Under this circumstances, the mentions in the front positions may not be able to make better use of global consistency, but their target entities have a high degree of similarity to the context words, which allows them to be disambiguated without relying on additional information. In the end, previous selected target entity information is encoded by global encoder and the encoding result will be served as input to the entity selector.",
"Before using entity selector to choose target entities, we pre-trained the global LSTM network. During the training process, we input not only positive samples but also negative ones to the LSTM. By doing this, we can enhance the robustness of the network. In the global encoder module, we adopt the following cross entropy loss function to train the model. ",
"$$L_{global} = -\\frac{1}{n}\\sum _x{\\left[y\\ln {y^{^{\\prime }}} + (1-y)\\ln (1-y^{^{\\prime }})\\right]}$$ (Eq. 12) ",
"Where $y\\in \\lbrace 0,1\\rbrace $ represents the label of the candidate entity. If the candidate entity is correct $y=1$ , otherwise $y=0$ . $y^{^{\\prime }}\\in (0,1)$ indicates the output of our model. After pre-training the global encoder, we start using the entity selector to choose the target entity for each mention and encode these selections."
],
[
"In the entity selector module, we choose the target entity from candidate set based on the results of local and global encoder. In the process of sequence disambiguation, each selection result will have an impact on subsequent decisions. Therefore, we transform the choice of the target entity into a reinforcement learning problem and view the entity selector as an agent. In particular, the agent is designed as a policy network which can learn a stochastic policy and prevents the agent from getting stuck at an intermediate state BIBREF12 . Under the guidance of policy, the agent can decide which action (choosing the target entity from the candidate set)should be taken at each state, and receive a delay reward when all the selections are made. In the following part, we first describe the state, action and reward. Then, we detail how to select target entity via a policy network.",
"The result of entity selection is based on the current state information. For time $t$ , the state vector $S_t$ is generated as follows: ",
"$$S_t = V_{m_i}^t\\oplus {V_{e_i}^t}\\oplus {V_{feature}^t}\\oplus {V_{e^*}^{t-1}}$$ (Eq. 15) ",
"Where $\\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities correspond to it. With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time, we copy multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \\in \\mathbb {R}^{1\\times {n}}$ to $V_{m_i}^t{^{\\prime }} \\in \\mathbb {R}^{k\\times {n}}$ and then combine it with $V_{e_i}^t \\in \\mathbb {R}^{k\\times {n}}$ . Since $V_{m_i}^t$ and $V_{m_i}^t$0 are mainly to represent semantic information, we add feature vector $V_{m_i}^t$1 to enrich lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{m_i}^t$2 is also added to $V_{m_i}^t$3 . As mentioned in global encoder module, $V_{m_i}^t$4 is the output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7 . Thus, the state $V_{m_i}^t$8 contains current information and previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate action.",
"According to the status at each time step, we take corresponding action. Specifically, we define the action at time step $t$ is to select the target entity $e_t^*$ for $m_t$ . The size of action space is the number of candidate entities for each mention, where $a_i \\in \\lbrace 0,1,2...k\\rbrace $ indicates the position of the selected entity in the candidate entity list. Clearly, each action is a direct indicator of target entity selection in our model. After completing all the actions in the sequence we will get a delayed reward.",
"The agent takes the reward value as the feedback of its action and learns the policy based on it. Since current selection result has a long-term impact on subsequent decisions, we don't give an immediate reward when taking an action. Instead, a delay reward is given by follows, which can reflect whether the action improves the overall performance or not. ",
"$$R(a_t) = p(a_t)\\sum _{j=t}^{T}p(a_j) + (1 - p(a_t))(\\sum _{j=t}^{T}p(a_j) + t - T)$$ (Eq. 16) ",
"where $p(a_t)\\in \\lbrace 0,1\\rbrace $ indicates whether the current action is correct or not. When the action is correct $p(a_t)=1$ otherwise $p(a_t)=0$ . Hence $\\sum _{j=t}^{T}p(a_j)$ and $\\sum _{j=t}^{T}p(a_j) + t - T$ respectively represent the number of correct and wrong actions from time t to the end of episode. Based on the above definition, our delayed reward can be used to guide the learning of the policy for entity linking.",
"After defining the state, action, and reward, our main challenge becomes to choose an action from the action space. To solve this problem, we sample the value of each action by a policy network $\\pi _{\\Theta }(a|s)$ . The structure of the policy network is shown in Figure 3. The input of the network is the current state, including the mention context representation, candidate entity representation, feature representation, and encoding of the previous decisions. We concatenate these representations and fed them into a multilayer perceptron, for each hidden layer, we generate the output by: ",
"$$h_i(S_t) = Relu(W_i*h_{i-1}(S_t) + b_i)$$ (Eq. 17) ",
"Where $W_i$ and $ b_i$ are the parameters of the $i$ th hidden layer, through the $relu$ activation function we get the $h_i(S_t)$ . After getting the output of the last hidden layer, we feed it into a softmax layer which generates the probability distribution of actions. The probability distribution is generated as follows: ",
"$$\\pi (a|s) = Softmax(W * h_l(S) + b)$$ (Eq. 18) ",
"Where the $W$ and $b$ are the parameters of the softmax layer. For each mention in the sequence, we will take action to select the target entity from its candidate set. After completing all decisions in the episode, each action will get an expected reward and our goal is to maximize the expected total rewards. Formally, the objective function is defined as: ",
"$$\\begin{split}\nJ(\\Theta ) &= \\mathbb {E}_{(s_t, a_t){\\sim }P_\\Theta {(s_t, a_t)}}R(s_1{a_1}...s_L{a_L}) \\\\\n&=\\sum _{t}\\sum _{a}\\pi _{\\Theta }(a|s)R(a_t)\n\\end{split}$$ (Eq. 19) ",
"Where $P_\\Theta {(s_t, a_t)}$ is the state transfer function, $\\pi _{\\Theta }(a|s)$ indicates the probability of taking action $a$ under the state $s$ , $R(a_t)$ is the expected reward of action $a$ at time step $t$ . According to REINFORCE policy gradient algorithm BIBREF13 , we update the policy gradient by the way of equation 9. ",
"$$\\Theta \\leftarrow \\Theta + \\alpha \\sum _{t}R(a_t)\\nabla _{\\Theta }\\log \\pi _{\\Theta }(a|s)$$ (Eq. 20) ",
"As the global encoder and the entity selector are correlated mutually, we train them jointly after pre-training the two networks. The details of the joint learning are presented in Algorithm 1.",
"[t] The Policy Learning for Entity Selector [1] Training data include multiple documents $D = \\lbrace D_1, D_2, ..., D_N\\rbrace $ The target entity for mentions $\\Gamma = \\lbrace T_1, T_2, ..., T_N\\rbrace $ ",
"Initialize the policy network parameter $\\Theta $ , global LSTM network parameter $\\Phi $ ; $D_k$ in $D$ Generate the candidate set for each mention Divide the mentions in $D_k$ into multiple sequences $S = \\lbrace S_1, S_2, ..., S_N\\rbrace $ ; $S_k$ in $S$ Rank the mentions $M = \\lbrace m_1, m_2, ..., m_n\\rbrace $ in $S_k$ based on the local similarity; $\\Phi $0 in $\\Phi $1 Sample the target entity $\\Phi $2 for $\\Phi $3 with $\\Phi $4 ; Input the $\\Phi $5 and $\\Phi $6 to global LSTM network; $\\Phi $7 End of sampling, update parameters Compute delayed reward $\\Phi $8 for each action; Update the parameter $\\Phi $9 of policy network:",
" $\\Theta \\leftarrow \\Theta + \\alpha \\sum _{t}R(a_t)\\nabla _{\\Theta }\\log \\pi _{\\Theta }(a|s)$ ",
"Update the parameter $\\Phi $ in the global LSTM network"
],
[
"In order to evaluate the effectiveness of our method, we train the RLEL model and validate it on a series of popular datasets that are also used by BIBREF0 , BIBREF1 . To avoid overfitting with one dataset, we use both AIDA-Train and Wikipedia data in the training set. Furthermore, we compare the RLEL with some baseline methods, where our model achieves the state-of-the-art results. We implement our models in Tensorflow and run experiments on 4 Tesla V100 GPU."
],
[
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.",
"ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.",
"MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)",
"AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.",
"WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.",
"WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.",
"OURSELF-WIKI is crawled by ourselves from Wikipedia pages.",
"During the training of our RLEL model, we select top K candidate entities for each mention to optimize the memory and run time. In the top K candidate list, we define the recall of correct target entity is $R_t$ . According to our statistics, when K is set to 1, $R_t$ is 0.853, when K is 5, $R_t$ is 0.977, when K increases to 10, $R_t$ is 0.993. Empirically, we choose top 5 candidate entities as the input of our RLEL model. For the entity description, there are lots of redundant information in the wikipedia page, to reduce the impact of noise data, we use TextRank algorithm BIBREF19 to select 15 keywords as description of the entity. Simultaneously, we choose 15 words around mention as its context. In the global LSTM network, when the number of mentions does not reach the set length, we adopt the mention padding strategy. In short, we copy the last mention in the sequence until the number of mentions reaches the set length.",
"We set the dimensions of word embedding and entity embedding to 300, where the word embedding and entity embedding are released by BIBREF20 and BIBREF0 respectively. For parameters of the local LSTM network, the number of LSTM cell units is set to 512, the batch size is 64, and the rank margin $\\gamma $ is 0.1. Similarly, in global LSTM network, the number of LSTM cell units is 700 and the batch size is 16. In the above two LSTM networks, the learning rate is set to 1e-3, the probability of dropout is set to 0.8, and the Adam is utilized as optimizer. In addition, we set the number of MLP layers to 4 and extend the priori feature dimension to 50 in the policy network."
],
[
"We compare RLEL with a series of EL systems which report state-of-the-art results on the test datasets. There are various methods including classification model BIBREF17 , rank model BIBREF21 , BIBREF15 and probability graph model BIBREF18 , BIBREF14 , BIBREF22 , BIBREF0 , BIBREF1 . Except that, Cheng $et$ $al.$ BIBREF23 formulate their global decision problem as an Integer Linear Program (ILP) which incorporates the entity-relation inference. Globerson $et$ $al.$ BIBREF24 introduce a multi-focal attention model which allows each candidate to focus on limited mentions, Yamada $et$ $al.$ BIBREF25 propose a word and entity embedding model specifically designed for EL.",
"We use the standard Accuracy, Precision, Recall and F1 at mention level (Micro) as the evaluation metrics: ",
"$$Accuracy = \\frac{|M \\cap M^*|}{|M \\cup M^*|}$$ (Eq. 31) ",
"$$Precision = \\frac{|M \\cap M^*|}{|M|}$$ (Eq. 32) ",
"where $M^*$ is the golden standard set of the linked name mentions, $M$ is the set of linked name mentions outputted by an EL method.",
"Same as previous work, we use in-KB accuracy and micro F1 to evaluate our method. We first test the model on the AIDA-B dataset. From Table 2, we can observe that our model achieves the best result. Previous best results on this dataset are generated by BIBREF0 , BIBREF1 which both built CRF models. They calculate the pairwise scores between all candidate entities. Differently, our model only considers the consistency of the target entities and ignores the relationship between incorrect candidates. The experimental results show that our model can reduce the impact of noise data and improve the accuracy of disambiguation. Apart from experimenting on AIDA-B, we also conduct experiments on several different datasets to verify the generalization performance of our model.",
"From Table 3, we can see that RLEL has achieved relatively good performances on ACE2004, CWEB and WIKI. At the same time, previous models BIBREF0 , BIBREF1 , BIBREF23 achieve better performances on the news datasets such as MSNBC and AQUINT, but their results on encyclopedia datasets such as WIKI are relatively poor. To avoid overfitting with some datasets and improve the robustness of our model, we not only use AIDA-Train but also add Wikipedia data to the training set. In the end, our model achieve the best overall performance.",
"For most existing EL systems, entities with lower frequency are difficult to disambiguate. To gain further insight, we analyze the accuracy of the AIDA-B dataset for situations where gold entities have low popularity. We divide the gold entities according to their pageviews in wikipedia, the statistical disambiguation results are shown in Table 4. Since some pageviews can not be obtained, we only count part of gold entities. The result indicates that our model is still able to work well for low-frequency entities. But for medium-frequency gold entities, our model doesn't work well enough. The most important reason is that other candidate entities corresponding to these medium-frequency gold entities have higher pageviews and local similarities, which makes the model difficult to distinguish."
],
[
"To demonstrate the effects of RLEL, we evaluate our model under different conditions. First, we evaluate the effect of sequence length on global decision making. Second, we assess whether sorting the mentions have a positive effect on the results. Third, we analysis the results of not adding globally encoding during entity selection. Last, we compare our RL selection strategy with the greedy choice.",
"A document may contain multiple topics, so we do not add all mentions to a single sequence. In practice, we add some adjacent mentions to the sequence and use reinforcement learning to select entities from beginning to end. To analysis the impact of the number of mentions on joint disambiguation, we experiment with sequences on different lengths. The results on AIDA-B are shown in Figure 4. We can see that when the sequence is too short or too long, the disambiguation results are both very poor. When the sequence length is less than 3, delay reward can't work in reinforcement learning, and when the sequence length reaches 5 or more, noise data may be added. Finally, we choose the 4 adjacent mentions to form a sequence.",
"In this section, we test whether ranking mentions is helpful for entity selections. At first, we directly input them into the global encoder by the order they appear in the text. We record the disambiguation results and compare them with the method which adopts ranking mentions. As shown in Figure 5a, the model with ranking mentions has achieved better performances on most of datasets, indicating that it is effective to place the mention that with a higher local similarity in front of the sequence. It is worth noting that the effect of ranking mentions is not obvious on the MSNBC dataset, the reason is that most of mentions in MSNBC have similar local similarities, the order of disambiguation has little effect on the final result.",
"Most of previous methods mainly use the similarities between entities to correlate each other, but our model associates them by encoding the selected entity information. To assess whether the global encoding contributes to disambiguation rather than add noise, we compare the performance with and without adding the global information. When the global encoding is not added, the current state only contains the mention context representation, candidate entity representation and feature representation, notably, the selected target entity information is not taken into account. From the results in Figure 5b, we can see that the model with global encoding achieves an improvement of 4% accuracy over the method that without global encoding.",
"To illustrate the necessity for adopting the reinforcement learning for entity selection, we compare two entity selection strategies like BIBREF5 . Specifically, we perform entity selection respectively with reinforcement learning and greedy choice. The greedy choice is to select the entity with largest local similarity from candidate set. But the reinforcement learning selection is guided by delay reward, which has a global perspective. In the comparative experiment, we keep the other conditions consistent, just replace the RL selection with a greedy choice. Based on the results in Figure 5c, we can draw a conclusion that our entity selector perform much better than greedy strategies."
],
[
"Table 5 shows two entity selection examples by our RLEL model. For multiple mentions appearing in the document, we first sort them according to their local similarities, and select the target entities in order by the reinforcement learning model. From the results of sorting and disambiguation, we can see that our model is able to utilize the topical consistency between mentions and make full use of the selected target entity information."
],
[
"The related work can be roughly divided into two groups: entity linking and reinforcement learning."
],
[
"Entity linking falls broadly into two major approaches: local and global disambiguation. Early studies use local models to resolve mentions independently, they usually disambiguate mentions based on lexical matching between the mention's surrounding words and the entity profile in the reference KB. Various methods have been proposed to model mention's local context ranging from binary classification BIBREF17 to rank models BIBREF26 , BIBREF27 . In these methods, a large number of hand-designed features are applied. For some marginal mentions that are difficult to extract features, researchers also exploit the data retrieved by search engines BIBREF28 , BIBREF29 or Wikipedia sentences BIBREF30 . However, the feature engineering and search engine methods are both time-consuming and laborious. Recently, with the popularity of deep learning models, representation learning is utilized to automatically find semantic features BIBREF31 , BIBREF32 . The learned entity representations which by jointly modeling textual contexts and knowledge base are effective in combining multiple sources of information. To make full use of the information contained in representations, we also utilize the pre-trained entity embeddings in our model.",
"In recent years, with the assumption that the target entities of all mentions in a document shall be related, many novel global models for joint linking are proposed. Assuming the topical coherence among mentions, authors in BIBREF33 , BIBREF34 construct factor graph models, which represent the mention and candidate entities as variable nodes, and exploit factor nodes to denote a series of features. Two recent studies BIBREF0 , BIBREF1 use fully-connected pairwise Conditional Random Field(CRF) model and exploit loopy belief propagation to estimate the max-marginal probability. Moreover, PageRank or Random Walk BIBREF35 , BIBREF18 , BIBREF7 are utilized to select the target entity for each mention. The above probabilistic models usually need to predefine a lot of features and are difficult to calculate the max-marginal probability as the number of nodes increases. In order to automatically learn features from the data, Cao et al. BIBREF9 applies Graph Convolutional Network to flexibly encode entity graphs. However, the graph-based methods are computationally expensive because there are lots of candidate entity nodes in the graph.",
"To reduce the calculation between candidate entity pairs, Globerson et al. BIBREF24 introduce a coherence model with an attention mechanism, where each mention only focus on a fixed number of mentions. Unfortunately, choosing the number of attention mentions is not easy in practice. Two recent studies BIBREF8 , BIBREF36 finish linking all mentions by scanning the pairs of mentions at most once, they assume each mention only needs to be consistent with one another mention in the document. The limitation of their method is that the consistency information is too sparse, resulting in low confidence. Similar to us, Guo et al. BIBREF18 also sort mentions according to the difficulty of disambiguation, but they did not make full use of the information of previously referred entities for the subsequent entity disambiguation. Nguyen et al. BIBREF2 use the sequence model, but they simply encode the results of the greedy choice, and measure the similarities between the global encoding and the candidate entity representations. Their model does not consider the long-term impact of current decisions on subsequent choices, nor does they add the selected target entity information to the current state to help disambiguation."
],
[
"In the last few years, reinforcement learning has emerged as a powerful tool for solving complex sequential decision-making problems. It is well known for its great success in the game field, such as Go BIBREF37 and Atari games BIBREF38 . Recently, reinforcement learning has also been successfully applied to many natural language processing tasks and achieved good performance BIBREF12 , BIBREF39 , BIBREF5 . Feng et al. BIBREF5 used reinforcement learning for relation classification task by filtering out the noisy data from the sentence bag and they achieved huge improvements compared with traditional classifiers. Zhang et al. BIBREF40 applied the reinforcement learning on sentence representation by automatically discovering task-relevant structures. To automatic taxonomy induction from a set of terms, Han et al. BIBREF41 designed an end-to-end reinforcement learning model to determine which term to select and where to place it on the taxonomy, which effectively reduced the error propagation between two phases. Inspired by the above works, we also add reinforcement learning to our framework."
],
[
"In this paper we consider entity linking as a sequence decision problem and present a reinforcement learning based model. Our model learns the policy on selecting target entities in a sequential manner and makes decisions based on current state and previous ones. By utilizing the information of previously referred entities, we can take advantage of global consistency to disambiguate mentions. For each selection result in the current state, it also has a long-term impact on subsequent decisions, which allows learned policy strategy has a global view. In experiments, we evaluate our method on AIDA-B and other well-known datasets, the results show that our system outperforms state-of-the-art solutions. In the future, we would like to use reinforcement learning to detect mentions and determine which mention should be firstly disambiguated in the document.",
"This research is supported by the GS501100001809National Key Research and Development Program of China (No. GS5011000018092018YFB1004703), GS501100001809the Beijing Municipal Science and Technology Project under grant (No. GS501100001809",
"Z181100002718004), and GS501100001809the National Natural Science Foundation of China grants(No. GS50110000180961602466)."
]
]
} | {
"question": [
"How fast is the model compared to baselines?",
"How big is the performance difference between this method and the baseline?",
"What datasets used for evaluation?",
"what are the mentioned cues?"
],
"question_id": [
"9aca4b89e18ce659c905eccc78eda76af9f0072a",
"b0376a7f67f1568a7926eff8ff557a93f434a253",
"dad8cc543a87534751f9f9e308787e1af06f0627",
"0481a8edf795768d062c156875d20b8fb656432c"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"Entity linking",
"Entity linking",
"Entity linking",
"Entity linking"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"42325ec6f5639d307e01d65ebd24c589954df837"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Comparing with the highest performing baseline: 1.3 points on ACE2004 dataset, 0.6 points on CWEB dataset, and 0.86 points in the average of all scores.",
"evidence": [
"FLOAT SELECTED: Table 3: Compare our model with other baseline methods on different types of datasets. The evaluation metric is micro F1."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Compare our model with other baseline methods on different types of datasets. The evaluation metric is micro F1."
]
}
],
"annotation_id": [
"2846a1ba6ad38fa848bcf90df690ea6e75a070e4"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"AIDA-B",
"ACE2004",
"MSNBC",
"AQUAINT",
"WNED-CWEB",
"WNED-WIKI"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.",
"ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.",
"MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)",
"AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.",
"WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.",
"WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation."
],
"highlighted_evidence": [
"In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. ",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.\n\nACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.\n\nMSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)\n\nAQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.\n\nWNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.\n\nWNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation."
]
},
{
"unanswerable": false,
"extractive_spans": [
"AIDA-CoNLL",
"ACE2004",
"MSNBC",
"AQUAINT",
"WNED-CWEB",
"WNED-WIKI",
"OURSELF-WIKI"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.",
"ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.",
"MSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)",
"AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.",
"WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.",
"WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.",
"OURSELF-WIKI is crawled by ourselves from Wikipedia pages."
],
"highlighted_evidence": [
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. ",
"AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.\n\nACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.\n\nMSNBC BIBREF16 contains top two stories in the ten news categories(Politics, Business, Sports etc.)\n\nAQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.\n\nWNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.\n\nWNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.\n\nOURSELF-WIKI is crawled by ourselves from Wikipedia pages."
]
}
],
"annotation_id": [
"007037927f1cabc42b0b0cd366c3fcf15becbf73",
"e9393b6c500f4ea6a8a0cb2df9c7307139c5cb0c"
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Figure 2: The overall structure of our RLEL model. It contains three parts: Local Encoder, Global Encoder and Entity Selector. In this framework, (Vmt ,Vekt ) denotes the concatenation of the mention context vector Vmt and one candidate entity vector Vekt . The policy network selects one entity from the candidate set, and Vat denotes the concatenation of the mention context vector Vmt and the selected entity vector Ve∗t . ht represents the hidden status of Vat , and it will be fed into St+1.",
"Where $\\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities correspond to it. With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time, we copy multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \\in \\mathbb {R}^{1\\times {n}}$ to $V_{m_i}^t{^{\\prime }} \\in \\mathbb {R}^{k\\times {n}}$ and then combine it with $V_{e_i}^t \\in \\mathbb {R}^{k\\times {n}}$ . Since $V_{m_i}^t$ and $V_{m_i}^t$0 are mainly to represent semantic information, we add feature vector $V_{m_i}^t$1 to enrich lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{m_i}^t$2 is also added to $V_{m_i}^t$3 . As mentioned in global encoder module, $V_{m_i}^t$4 is the output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7 . Thus, the state $V_{m_i}^t$8 contains current information and previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate action."
],
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: The overall structure of our RLEL model. It contains three parts: Local Encoder, Global Encoder and Entity Selector. In this framework, (Vmt ,Vekt ) denotes the concatenation of the mention context vector Vmt and one candidate entity vector Vekt . The policy network selects one entity from the candidate set, and Vat denotes the concatenation of the mention context vector Vmt and the selected entity vector Ve∗t . ht represents the hidden status of Vat , and it will be fed into St+1.",
"As mentioned in global encoder module, $V_{m_i}^t$4 is the output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7 ."
]
}
],
"annotation_id": [
"af84319f3ae34ff40bb5f030903e56a43afe43ab"
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
]
} | {
"caption": [
"Figure 1: Illustration of mentions in the free text and their candidate entities in the knowledge base. Solid black lines point to the correct target entities corresponding to the mentions and to the descriptions of these correct target entities. Solid red lines indicate the consistency between correct target entities and the orange dashed lines denote the consistency between wrong candidate entities.",
"Figure 2: The overall structure of our RLEL model. It contains three parts: Local Encoder, Global Encoder and Entity Selector. In this framework, (Vmt ,Vekt ) denotes the concatenation of the mention context vector Vmt and one candidate entity vector Vekt . The policy network selects one entity from the candidate set, and Vat denotes the concatenation of the mention context vector Vmt and the selected entity vector Ve∗t . ht represents the hidden status of Vat , and it will be fed into St+1.",
"Figure 3: The architecture of policy network. It is a feedforward neural network and the input consists of four parts: mention context representation, candidate entity representation, feature representation, and encoding of the previous decisions.",
"Table 1: Statistics of document and mention numbers on experimental datasets.",
"Table 2: In-KB accuracy result on AIDA-B dataset.",
"Table 3: Compare our model with other baseline methods on different types of datasets. The evaluation metric is micro F1.",
"Figure 4: The performance of models with different sequence lengths on AIDA-B dataset.",
"Table 4: The micro F1 of gold entities with different pageviews on part of AIDA-B dataset.",
"Figure 5: The comparative experiments of RLEL model.",
"Table 5: Entity selection examples by our RLEL model."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Figure4-1.png",
"7-Table4-1.png",
"8-Figure5-1.png",
"8-Table5-1.png"
]
} |
1909.00542 | Classification Betters Regression in Query-based Multi-document Summarisation Techniques for Question Answering: Macquarie University at BioASQ7b | Task B Phase B of the 2019 BioASQ challenge focuses on biomedical question answering. Macquarie University's participation applies query-based multi-document extractive summarisation techniques to generate a multi-sentence answer given the question and the set of relevant snippets. In past participation we explored the use of regression approaches using deep learning architectures and a simple policy gradient architecture. For the 2019 challenge we experiment with the use of classification approaches with and without reinforcement learning. In addition, we conduct a correlation analysis between various ROUGE metrics and the BioASQ human evaluation scores. | {
"section_name": [
"Introduction",
"Related Work",
"Classification vs. Regression Experiments",
"Deep Learning Models",
"Reinforcement Learning",
"Evaluation Correlation Analysis",
"Submitted Runs",
"Conclusions"
],
"paragraphs": [
[
"The BioASQ Challenge includes a question answering task (Phase B, part B) where the aim is to find the “ideal answer” — that is, an answer that would normally be given by a person BIBREF0. This is in contrast with most other question answering challenges where the aim is normally to give an exact answer, usually a fact-based answer or a list. Given that the answer is based on an input that consists of a biomedical question and several relevant PubMed abstracts, the task can be seen as an instance of query-based multi-document summarisation.",
"As in past participation BIBREF1, BIBREF2, we wanted to test the use of deep learning and reinforcement learning approaches for extractive summarisation. In contrast with past years where the training procedure was based on a regression set up, this year we experiment with various classification set ups. The main contributions of this paper are:",
"We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels.",
"We conduct correlation analysis between various ROUGE evaluation metrics and the human evaluations conducted at BioASQ and show that Precision and F1 correlate better than Recall.",
"Section SECREF2 briefly introduces some related work for context. Section SECREF3 describes our classification and regression experiments. Section SECREF4 details our experiments using deep learning architectures. Section SECREF5 explains the reinforcement learning approaches. Section SECREF6 shows the results of our correlation analysis between ROUGE scores and human annotations. Section SECREF7 lists the specific runs submitted at BioASQ 7b. Finally, Section SECREF8 concludes the paper."
],
[
"The BioASQ challenge has organised annual challenges on biomedical semantic indexing and question answering since 2013 BIBREF0. Every year there has been a task about semantic indexing (task a) and another about question answering (task b), and occasionally there have been additional tasks. The tasks defined for 2019 are:",
"Large Scale Online Biomedical Semantic Indexing.",
"Biomedical Semantic QA involving Information Retrieval (IR), Question Answering (QA), and Summarisation.",
"Medical Semantic Indexing in Spanish.",
"BioASQ Task 7b consists of two phases. Phase A provides a biomedical question as an input, and participants are expected to find relevant concepts from designated terminologies and ontologies, relevant articles from PubMed, relevant snippets from the relevant articles, and relevant RDF triples from designated ontologies. Phase B provides a biomedical question and a list of relevant articles and snippets, and participant systems are expected to return the exact answers and the ideal answers. The training data is composed of the test data from all previous years, and amounts to 2,747 samples. There has been considerable research on the use of machine learning approaches for tasks related to text summarisation, especially on single-document summarisation. Abstractive approaches normally use an encoder-decoder architecture and variants of this architecture incorporate attention BIBREF3 and pointer-generator BIBREF4. Recent approaches leveraged the use of pre-trained models BIBREF5. Recent extractive approaches to summarisation incorporate recurrent neural networks that model sequences of sentence extractions BIBREF6 and may incorporate an abstractive component and reinforcement learning during the training stage BIBREF7. But relatively few approaches have been proposed for query-based multi-document summarisation. Table TABREF8 summarises the approaches presented in the proceedings of the 2018 BioASQ challenge."
],
[
"Our past participation in BioASQ BIBREF1, BIBREF2 and this paper focus on extractive approaches to summarisation. Our decision to focus on extractive approaches is based on the observation that a relatively large number of sentences from the input snippets has very high ROUGE scores, thus suggesting that human annotators had a general tendency to copy text from the input to generate the target summaries BIBREF1. Our past participating systems used regression approaches using the following framework:",
"Train the regressor to predict the ROUGE-SU4 F1 score of the input sentence.",
"Produce a summary by selecting the top $n$ input sentences.",
"A novelty in the current participation is the introduction of classification approaches using the following framework.",
"Train the classifier to predict the target label (“summary” or “not summary”) of the input sentence.",
"Produce a summary by selecting all sentences predicted as “summary”.",
"If the total number of sentences selected is less than $n$, select $n$ sentences with higher probability of label “summary”.",
"Introducing a classifier makes labelling the training data not trivial, since the target summaries are human-generated and they do not have a perfect mapping to the input sentences. In addition, some samples have multiple reference summaries. BIBREF11 showed that different data labelling approaches influence the quality of the final summary, and some labelling approaches may lead to better results than using regression. In this paper we experiment with the following labelling approaches:",
": Label as “summary” all sentences from the input text that have a ROUGE score above a threshold $t$.",
": Label as “summary” the $m$ input text sentences with highest ROUGE score.",
"As in BIBREF11, The ROUGE score of an input sentence was the ROUGE-SU4 F1 score of the sentence against the set of reference summaries.",
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"Preliminary experiments showed a relatively high number of cases where the classifier did not classify any of the input sentences as “summary”. To solve this problem, and as mentioned above, the summariser used in Table TABREF26 introduces a backoff step that extracts the $n$ sentences with highest predicted values when the summary has less than $n$ sentences. The value of $n$ is as reported in our prior work and shown in Table TABREF25.",
"The results confirm BIBREF11's finding that classification outperforms regression. However, the actual choice of optimal labelling scheme was different: whereas in BIBREF11 the optimal labelling was based on a labelling threshold of 0.1, our experiments show a better result when using the top 5 sentences as the target summary. The reason for this difference might be the fact that BIBREF11 used all sentences from the abstracts of the relevant PubMed articles, whereas we use only the snippets as the input to our summariser. Consequently, the number of input sentences is now much smaller. We therefore report the results of using the labelling schema of top 5 snippets in all subsequent classifier-based experiments of this paper.",
"barchart=[fill=black!20,draw=black] errorbar=[very thin,draw=black!75] sscale=[very thin,draw=black!75]"
],
[
"Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models. In particular, we add a sigmoid activation to the final layer, and use cross-entropy as the loss function. The complete architecture is shown in Fig. FIGREF28.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer.",
"Table TABREF26 also shows the standard deviation across the cross-validation folds. Whereas this standard deviation is fairly large compared with the differences in results, in general the results are compatible with the top part of the table and prior work suggesting that classification-based approaches improve over regression-based approaches."
],
[
"We also experiment with the use of reinforcement learning techniques. Again these experiments are based on BIBREF2, who uses REINFORCE to train a global policy. The policy predictor uses a simple feedforward network with a hidden layer.",
"The results reported by BIBREF2 used ROUGE Recall and indicated no improvement with respect to deep learning architectures. Human evaluation results are preferable over ROUGE but these were made available after the publication of the paper. When comparing the ROUGE and human evaluation results (Table TABREF29), we observe an inversion of the results. In particular, the reinforcement learning approaches (RL) of BIBREF2 receive good human evaluation results, and as a matter of fact they are the best of our runs in two of the batches. In contrast, the regression systems (NNR) fare relatively poorly. Section SECREF6 expands on the comparison between the ROUGE and human evaluation scores.",
"Encouraged by the results of Table TABREF29, we decided to continue with our experiments with reinforcement learning. We use the same features as in BIBREF2, namely the length (in number of sentences) of the summary generated so far, plus the $tf.idf$ vectors of the following:",
"Candidate sentence;",
"Entire input to summarise;",
"Summary generated so far;",
"Candidate sentences that are yet to be processed; and",
"Question.",
"The reward used by REINFORCE is the ROUGE value of the summary generated by the system. Since BIBREF2 observed a difference between the ROUGE values of the Python implementation of ROUGE and the original Perl version (partly because the Python implementation does not include ROUGE-SU4), we compare the performance of our system when trained with each of them. Table TABREF35 summarises some of our experiments. We ran the version trained on Python ROUGE once, and the version trained on Perl twice. The two Perl runs have different results, and one of them clearly outperforms the Python run. However, given the differences of results between the two Perl runs we advice to re-run the experiments multiple times and obtain the mean and standard deviation of the runs before concluding whether there is any statistical difference between the results. But it seems that there may be an improvement of the final evaluation results when training on the Perl ROUGE values, presumably because the final evaluation results are measured using the Perl implementation of ROUGE.",
"We have also tested the use of word embeddings instead of $tf.idf$ as input features to the policy model, while keeping the same neural architecture for the policy (one hidden layer using the same number of hidden nodes). In particular, we use the mean of word embeddings using 100 and 200 dimensions. These word embeddings were pre-trained using word2vec on PubMed documents provided by the organisers of BioASQ, as we did for the architectures described in previous sections. The results, not shown in the paper, indicated no major improvement, and re-runs of the experiments showed different results on different runs. Consequently, our submission to BioASQ included the original system using $tf.idf$ as input features in all batches but batch 2, as described in Section SECREF7."
],
[
"As mentioned in Section SECREF5, there appears to be a large discrepancy between ROUGE Recall and the human evaluations. This section describes a correlation analysis between human and ROUGE evaluations using the runs of all participants to all previous BioASQ challenges that included human evaluations (Phase B, ideal answers). The human evaluation results were scraped from the BioASQ Results page, and the ROUGE results were kindly provided by the organisers. We compute the correlation of each of the ROUGE metrics (recall, precision, F1 for ROUGE-2 and ROUGE-SU4) against the average of the human scores. The correlation metrics are Pearson, Kendall, and a revised Kendall correlation explained below.",
"The Pearson correlation between two variables is computed as the covariance of the two variables divided by the product of their standard deviations. This correlation is a good indication of a linear relation between the two variables, but may not be very effective when there is non-linear correlation.",
"The Spearman rank correlation and the Kendall rank correlation are two of the most popular among metrics that aim to detect non-linear correlations. The Spearman rank correlation between two variables can be computed as the Pearson correlation between the rank values of the two variables, whereas the Kendall rank correlation measures the ordinal association between the two variables using Equation DISPLAY_FORM36.",
"It is useful to account for the fact that the results are from 28 independent sets (3 batches in BioASQ 1 and 5 batches each year between BioASQ 2 and BioASQ 6). We therefore also compute a revised Kendall rank correlation measure that only considers pairs of variable values within the same set. The revised metric is computed using Equation DISPLAY_FORM37, where $S$ is the list of different sets.",
"Table TABREF38 shows the results of all correlation metrics. Overall, ROUGE-2 and ROUGE-SU4 give similar correlation values but ROUGE-SU4 is marginally better. Among precision, recall and F1, both precision and F1 are similar, but precision gives a better correlation. Recall shows poor correlation, and virtually no correlation when using the revised Kendall measure. For reporting the evaluation of results, it will be therefore more useful to use precision or F1. However, given the small difference between precision and F1, and given that precision may favour short summaries when used as a function to optimise in a machine learning setting (e.g. using reinforcement learning), it may be best to use F1 as the metric to optimise.",
"Fig. FIGREF40 shows the scatterplots of ROUGE-SU4 recall, precision and F1 with respect to the average human evaluation. We observe that the relation between ROUGE and the human evaluations is not linear, and that Precision and F1 have a clear correlation."
],
[
"Table TABREF41 shows the results and details of the runs submitted to BioASQ. The table uses ROUGE-SU4 Recall since this is the metric available at the time of writing this paper. However, note that, as explained in Section SECREF6, these results might differ from the final human evaluation results. Therefore we do not comment on the results, other than observing that the “first $n$” baseline produces the same results as the neural regressor. As mentioned in Section SECREF3, the labels used for the classification experiments are the 5 sentences with highest ROUGE-SU4 F1 score."
],
[
"Macquarie University's participation in BioASQ 7 focused on the task of generating the ideal answers. The runs use query-based extractive techniques and we experiment with classification, regression, and reinforcement learning approaches. At the time of writing there were no human evaluation results, and based on ROUGE-F1 scores under cross-validation on the training data we observed that classification approaches outperform regression approaches. We experimented with several approaches to label the individual sentences for the classifier and observed that the optimal labelling policy for this task differed from prior work.",
"We also observed poor correlation between ROUGE-Recall and human evaluation metrics and suggest to use alternative automatic evaluation metrics with better correlation, such as ROUGE-Precision or ROUGE-F1. Given the nature of precision-based metrics which could bias the system towards returning short summaries, ROUGE-F1 is probably more appropriate when using at development time, for example for the reward function used by a reinforcement learning system.",
"Reinforcement learning gives promising results, especially in human evaluations made on the runs submitted to BioASQ 6b. This year we introduced very small changes to the runs using reinforcement learning, and will aim to explore more complex reinforcement learning strategies and more complex neural models in the policy and value estimators."
]
]
} | {
"question": [
"How did the author's work rank among other submissions on the challenge?",
"What approaches without reinforcement learning have been tried?",
"What classification approaches were experimented for this task?",
"Did classification models perform better than previous regression one?"
],
"question_id": [
"b6a4ab009e6f213f011320155a7ce96e713c11cf",
"cfffc94518d64cb3c8789395707e4336676e0345",
"f60629c01f99de3f68365833ee115b95a3388699",
"a7cb4f8e29fd2f3d1787df64cd981a6318b65896"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"be76304cc653b787c5b7c0d4f88dbfbafd20e537"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "classification, regression, neural methods",
"evidence": [
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer."
],
"highlighted_evidence": [
"The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). "
]
},
{
"unanswerable": false,
"extractive_spans": [
" Support Vector Regression (SVR) and Support Vector Classification (SVC)",
"deep learning regression models of BIBREF2 to convert them to classification models"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models. In particular, we add a sigmoid activation to the final layer, and use cross-entropy as the loss function. The complete architecture is shown in Fig. FIGREF28."
],
"highlighted_evidence": [
"The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively.",
"Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models."
]
}
],
"annotation_id": [
"ada830beff3690f98d83d92a55dc600fd8f87d0c",
"dd13f22ac95caf0d6996852322bdb192ffdf3ba9"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"NNC SU4 F1",
"NNC top 5",
"Support Vector Classification (SVC)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.",
"The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer."
],
"highlighted_evidence": [
"The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively.",
"The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence."
]
}
],
"annotation_id": [
"00aa8254441466bf3eb8d92b5cb8e6f0ccba0fcb"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels."
],
"highlighted_evidence": [
"We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels."
]
}
],
"annotation_id": [
"74f77e49538c04f04248ecb1687279386942ee72"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1. Summarisation techniques used in BioASQ 6b for the generation of ideal answers. The evaluation result is the human evaluation of the best run.",
"Fig. 2. Architecture of the neural classification and regression systems. A matrix of pre-trained word embeddings (same pre-trained vectors as in Fig. 1) is used to find the embeddings of the words of the input sentence and the question. Then, LSTM chains are used to generate sentence embeddings — the weights of the LSTM chains of input sentence and question are not shared. Then, the sentence position is concatenated to the sentence embedding and the similarity of sentence and question embeddings, implemented as a product. A final layer predicts the label of the sentence.",
"Table 5. Experiments using Perl and Python versions of ROUGE. The Python version used the average of ROUGE-2 and ROUGE-L, whereas the Perl version used ROUGESU4.",
"Table 6. Correlation analysis of evaluation results",
"Table 7. Runs submitted to BioASQ 7b",
"Fig. 3. Scatterplots of ROUGE SU4 evaluation metrics against the average human evaluations."
],
"file": [
"3-Table1-1.png",
"6-Figure2-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"10-Table7-1.png",
"11-Figure3-1.png"
]
} |
1810.06743 | Marrying Universal Dependencies and Universal Morphology | The Universal Dependencies (UD) and Universal Morphology (UniMorph) projects each present schemata for annotating the morphosyntactic details of language. Each project also provides corpora of annotated text in many languages - UD at the token level and UniMorph at the type level. As each corpus is built by different annotators, language-specific decisions hinder the goal of universal schemata. With compatibility of tags, each project's annotations could be used to validate the other's. Additionally, the availability of both type- and token-level resources would be a boon to tasks such as parsing and homograph disambiguation. To ease this interoperability, we present a deterministic mapping from Universal Dependencies v2 features into the UniMorph schema. We validate our approach by lookup in the UniMorph corpora and find a macro-average of 64.13% recall. We also note incompatibilities due to paucity of data on either side. Finally, we present a critical evaluation of the foundations, strengths, and weaknesses of the two annotation projects. | {
"section_name": [
"Introduction",
"Background: Morphological Inflection",
"Two Schemata, Two Philosophies",
"Universal Dependencies",
"UniMorph",
"Similarities in the annotation",
"UD treebanks and UniMorph tables",
"A Deterministic Conversion",
"Experiments",
"Intrinsic evaluation",
"Extrinsic evaluation",
"Results",
"Related Work",
"Conclusion and Future Work",
"Acknowledgments"
],
"paragraphs": [
[
"The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2 , BIBREF3 projects. Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked. The schemata capture largely similar information, so one may want to leverage both UD's token-level treebanks and UniMorph's type-level lookup tables and unify the two resources. This would permit a leveraging of both the token-level UD treebanks and the type-level UniMorph tables of paradigms. Unfortunately, neither resource perfectly realizes its schema. On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in fig:disagreement.",
"A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.",
"This tool enables a synergistic use of UniMorph and Universal Dependencies, as well as teasing out the annotation discrepancies within and across projects. When one dataset disobeys its schema or disagrees with a related language, the flaws may not be noticed except by such a methodological dive into the resources. When the maintainers of the resources ameliorate these flaws, the resources move closer to the goal of a universal, cross-lingual inventory of features for morphological annotation.",
"The contributions of this work are:"
],
[
"Morphological inflection is the act of altering the base form of a word (the lemma, represented in fixed-width type) to encode morphosyntactic features. As an example from English, prove takes on the form proved to indicate that the action occurred in the past. (We will represent all surface forms in quotation marks.) The process occurs in the majority of the world's widely-spoken languages, typically through meaningful affixes. The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing: The likelihood of observing a particular word form diminishes.",
"A classic result in psycholinguistics BIBREF4 shows that inflectional morphology is a fully productive process. Indeed, it cannot be that humans simply have the equivalent of a lookup table, where they store the inflected forms for retrieval as the syntactic context requires. Instead, there needs to be a mental process that can generate properly inflected words on demand. BIBREF4 showed this insightfully through the wug-test, an experiment where she forced participants to correctly inflect out-of-vocabulary lemmata, such as the novel noun wug.",
"Certain features of a word do not vary depending on its context: In German or Spanish where nouns are gendered, the word for onion will always be grammatically feminine. Thus, to prepare for later discussion, we divide the morphological features of a word into two categories: the modifiable inflectional features and the fixed lexical features.",
"A part of speech (POS) is a coarse syntactic category (like verb) that begets a word's particular menu of lexical and inflectional features. In English, verbs express no gender, and adjectives do not reflect person or number. The part of speech dictates a set of inflectional slots to be filled by the surface forms. Completing these slots for a given lemma and part of speech gives a paradigm: a mapping from slots to surface forms. Regular English verbs have five slots in their paradigm BIBREF5 , which we illustrate for the verb prove, using simple labels for the forms in tab:ptb.",
"A morphosyntactic schema prescribes how language can be annotated—giving stricter categories than our simple labels for prove—and can vary in the level of detail provided. Part of speech tags are an example of a very coarse schema, ignoring details of person, gender, and number. A slightly finer-grained schema for English is the Penn Treebank tagset BIBREF6 , which includes signals for English morphology. For instance, its VBZ tag pertains to the specially inflected 3rd-person singular, present-tense verb form (e.g. proves in tab:ptb).",
"If the tag in a schema is detailed enough that it exactly specifies a slot in a paradigm, it is called a morphosyntactic description (MSD). These descriptions require varying amounts of detail: While the English verbal paradigm is small enough to fit on a page, the verbal paradigm of the Northeast Caucasian language Archi can have over 1500000 slots BIBREF7 ."
],
[
"Unlike the Penn Treebank tags, the UD and UniMorph schemata are cross-lingual and include a fuller lexicon of attribute-value pairs, such as Person: 1. Each was built according to a different set of principles. UD's schema is constructed bottom-up, adapting to include new features when they're identified in languages. UniMorph, conversely, is top-down: A cross-lingual survey of the literature of morphological phenomena guided its design. UniMorph aims to be linguistically complete, containing all known morphosyntactic attributes. Both schemata share one long-term goal: a total inventory for annotating the possible morphosyntactic features of a word."
],
[
"The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes (also called features in UD) annotating meaning or syntax, as well as language-specific attributes. In order to ensure consistent annotation, attributes are included into the general UD schema if they occur in several corpora. Language-specific attributes are used when only one corpus annotates for a specific feature.",
"The UD schema seeks to balance language-specific and cross-lingual concerns. It annotates for both inflectional features such as case and lexical features such as gender. Additionally, the UD schema annotates for features which can be interpreted as derivational in some languages. For example, the Czech UD guidance uses a Coll value for the Number feature to denote mass nouns (for example, \"lidstvo\" \"humankind\" from the root \"lid\" \"people\").",
"UD represents a confederation of datasets BIBREF8 annotated with dependency relationships (which are not the focus of this work) and morphosyntactic descriptions. Each dataset is an annotated treebank, making it a resource of token-level annotations. The schema is guided by these treebanks, with feature names chosen for relevance to native speakers. (In sec:unimorph, we will contrast this with UniMorph's treatment of morphosyntactic categories.) The UD datasets have been used in the CoNLL shared tasks BIBREF9 ."
],
[
"In the Universal Morphological Feature Schema BIBREF10 , there are at least 212 values, spread across 23 attributes. It identifies some attributes that UD excludes like information structure and deixis, as well as providing more values for certain attributes, like 23 different noun classes endemic to Bantu languages. As it is a schema for marking morphology, its part of speech attribute does not have POS values for punctuation, symbols, or miscellany (Punct, Sym, and X in Universal Dependencies).",
"Like the UD schema, the decomposition of a word into its lemma and MSD is directly comparable across languages. Its features are informed by a distinction between universal categories, which are widespread and psychologically real to speakers; and comparative concepts, only used by linguistic typologists to compare languages BIBREF11 . Additionally, it strives for identity of meaning across languages, not simply similarity of terminology. As a prime example, it does not regularly label a dative case for nouns, for reasons explained in depth by BIBREF11 .",
"The UniMorph resources for a language contain complete paradigms extracted from Wiktionary BIBREF12 , BIBREF13 . Word types are annotated to form a database, mapping a lemma–tag pair to a surface form. The schema is explained in detail in BIBREF10 . It has been used in the SIGMORPHON shared task BIBREF14 and the CoNLL–SIGMORPHON shared tasks BIBREF15 , BIBREF16 . Several components of the UniMorph schema have been adopted by UD."
],
[
"While the two schemata annotate different features, their annotations often look largely similar. Consider the attested annotation of the Spanish word mandaba (I/he/she/it) commanded. tab:annotations shows that these annotations share many attributes.",
"Some conversions are straightforward: VERB to V, Mood=Ind to IND, Number=Sing to SG, and Person=3 to 3. One might also suggest mapping Tense=Imp to IPFV, though this crosses semantic categories: IPFV represents the imperfective aspect, whereas Tense=Imp comes from imperfect, the English name often given to Spanish's pasado continuo form. The imperfect is a verb form which combines both past tense and imperfective aspect. UniMorph chooses to split this into the atoms PST and IPFV, while UD unifies them according to the familiar name of the tense."
],
[
"Prima facie, the alignment task may seem trivial. But we've yet to explore the humans in the loop. This conversion is a hard problem because we're operating on idealized schemata. We're actually annotating human decisions—and human mistakes. If both schemata were perfectly applied, their overlapping attributes could be mapped to each other simply, in a cross-lingual and totally general way. Unfortunately, the resources are imperfect realizations of their schemata. The cross-lingual, cross-resource, and within-resource problems that we'll note mean that we need a tailor-made solution for each language.",
"Showcasing their schemata, the Universal Dependencies and UniMorph projects each present large, annotated datasets. UD's v2.1 release BIBREF1 has 102 treebanks in 60 languages. The large resource, constructed by independent parties, evinces problems in the goal of a universal inventory of annotations. Annotators may choose to omit certain values (like the coerced gender of refrescante in fig:disagreement), and they may disagree on how a linguistic concept is encoded. (See, e.g., BIBREF11 's ( BIBREF11 ) description of the dative case.) Additionally, many of the treebanks were created by fully- or semi-automatic conversion from treebanks with less comprehensive annotation schemata than UD BIBREF0 . For instance, the Spanish word vas you go is incorrectly labeled Gender: Fem|Number: Pl because it ends in a character sequence which is common among feminine plural nouns. (Nevertheless, the part of speech field for vas is correct.)",
"UniMorph's development is more centralized and pipelined. Inflectional paradigms are scraped from Wiktionary, annotators map positions in the scraped data to MSDs, and the mapping is automatically applied to all of the scraped paradigms. Because annotators handle languages they are familiar with (or related ones), realization of the schema is also done on a language-by-language basis. Further, the scraping process does not capture lexical aspects that are not inflected, like noun gender in many languages. The schema permits inclusion of these details; their absence is an artifact of the data collection process. Finally, UniMorph records only exist for nouns, verbs, and adjectives, though the schema is broader than these categories.",
"For these reasons, we treat the corpora as imperfect realizations of the schemata. Moreover, we contend that ambiguity in the schemata leave the door open to allow for such imperfections. With no strict guidance, it's natural that annotators would take different paths. Nevertheless, modulo annotator disagreement, we assume that within a particular corpus, one word form will always be consistently annotated.",
"Three categories of annotation difficulty are missing values, language-specific attributes, and multiword expressions."
],
[
"In our work, the goal is not simply to translate one schema into the other, but to translate one resource (the imperfect manifestation of the schema) to match the other. The differences between the schemata and discrepancies in annotation mean that the transformation of annotations from one schema to the other is not straightforward.",
"Two naive options for the conversion are a lookup table of MSDs and a lookup table of the individual attribute-value pairs which comprise the MSDs. The former is untenable: the table of all UD feature combinations (including null features, excluding language-specific attributes) would have 2.445e17 entries. Of course, most combinations won't exist, but this gives a sense of the table's scale. Also, it doesn't leverage the factorial nature of the annotations: constructing the table would require a massive duplication of effort. On the other hand, attribute-value lookup lacks the flexibility to show how a pair of values interacts. Neither approach would handle language- and annotator-specific tendencies in the corpora.",
"Our approach to converting UD MSDs to UniMorph MSDs begins with the attribute-value lookup, then amends it on a language-specific basis. Alterations informed by the MSD and the word form, like insertion, substitution, and deletion, increase the number of agreeing annotations. They are critical for work that examines the MSD monolithically instead of feature-by-feature BIBREF25 , BIBREF26 : Without exact matches, converting the individual tags becomes hollow.",
"Beginning our process, we relied on documentation of the two schemata to create our initial, language-agnostic mapping of individual values. This mapping has 140 pairs in it. Because the mapping was derived purely from the schemata, it is a useful approximation of how well the schemata match up. We note, however, that the mapping does not handle idiosyncrasies like the many uses of dative or features which are represented in UniMorph by argument templates: possession and ergative–absolutive argument marking. The initial step of our conversion is using this mapping to populate a proposed UniMorph MSD.",
"As shown in sec:results, the initial proposal is often frustratingly deficient. Thus we introduce the post-edits. To concoct these, we looked into UniMorph corpora for these languages, compared these to the conversion outputs, and then sought to bring the conversion outputs closer to the annotations in the actual UniMorph corpora. When a form and its lemma existed in both corpora, we could directly inspect how the annotations differed. Our process of iteratively refining the conversion implies a table which exactly maps any combination of UD MSD and its related values (lemma, form, etc.) to a UniMorph MSD, though we do not store the table explicitly.",
"Some conversion rules we've created must be applied before or after others. These sequential dependencies provide conciseness. Our post-editing procedure operates on the initial MSD hypothesis as follows:"
],
[
"We evaluate our tool on two tasks:",
"To be clear, our scope is limited to the schema conversion. Future work will explore NLP tasks that exploit both the created token-level UniMorph data and the existing type-level UniMorph data."
],
[
"We transform all UD data to the UniMorph. We compare the simple lookup-based transformation to the one with linguistically informed post-edits on all languages with both UD and UniMorph data. We then evaluate the recall of MSDs without partial credit.",
"Because the UniMorph tables only possess annotations for verbs, nouns, adjectives, or some combination, we can only examine performance for these parts of speech. We consider two words to be a match if their form and lemma are present in both resources. Syncretism allows a single surface form to realize multiple MSDs (Spanish mandaba can be first- or third-person), so we define success as the computed MSD matching any of the word's UniMorph MSDs. This gives rise to an equation for recall: of the word–lemma pairs found in both resources, how many of their UniMorph-converted MSDs are present in the UniMorph tables?",
"Our problem here is not a learning problem, so the question is ill-posed. There is no training set, and the two resources for a given language make up a test set. The quality of our model—the conversion tool—comes from how well we encode prior knowledge about the relationship between the UD and UniMorph corpora."
],
[
"If the UniMorph-converted treebanks perform differently on downstream tasks, then they convey different information. This signals a failure of the conversion process. As a downstream task, we choose morphological tagging, a critical step to leveraging morphological information on new text.",
"We evaluate taggers trained on the transformed UD data, choosing eight languages randomly from the intersection of UD and UniMorph resources. We report the macro-averaged F1 score of attribute-value pairs on a held-out test set, with official train/validation/test splits provided in the UD treebanks. As a reference point, we also report tagging accuracy on those languages' untransformed data.",
"We use the state-of-the-art morphological tagger of BIBREF0 . It is a factored conditional random field with potentials for each attribute, attribute pair, and attribute transition. The potentials are computed by neural networks, predicting the values of each attribute jointly but not monolithically. Inference with the potentials is performed approximately by loopy belief propagation. We use the authors' hyperparameters.",
"We note a minor implementation detail for the sake of reproducibility. The tagger exploits explicit guidance about the attribute each value pertains to. The UniMorph schema's values are globally unique, but their attributes are not explicit. For example, the UniMorph Masc denotes a masculine gender. We amend the code of BIBREF0 to incorporate attribute identifiers for each UniMorph value."
],
[
"We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.",
"For the extrinsic task, the performance is reasonably similar whether UniMorph or UD; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We recognize that in every case, tagging F1 increased—albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance."
],
[
"The goal of a tagset-to-tagset mapping of morphological annotations is shared by the Interset project BIBREF28 . Interset decodes features in the source corpus to a tag interlingua, then encodes that into target corpus features. (The idea of an interlingua is drawn from machine translation, where a prevailing early mindset was to convert to a universal representation, then encode that representation's semantics in the target language. Our approach, by contrast, is a direct flight from the source to the target.) Because UniMorph corpora are noisy, the encoding from the interlingua would have to be rewritten for each target. Further, decoding the UD MSD into the interlingua cannot leverage external information like the lemma and form.",
"The creators of HamleDT sought to harmonize dependency annotations among treebanks, similar to our goal of harmonizing across resources BIBREF29 . The treebanks they sought to harmonize used multiple diverse annotation schemes, which the authors unified under a single scheme.",
" BIBREF30 present mappings into a coarse, universal part of speech for 22 languages. Working with POS tags rather than morphological tags (which have far more dimensions), their space of options to harmonize is much smaller than ours.",
"Our extrinsic evaluation is most in line with the paradigm of BIBREF31 (and similar work therein), who compare syntactic parser performance on UD treebanks annotated with two styles of dependency representation. Our problem differs, though, in that the dependency representations express different relationships, while our two schemata vastly overlap. As our conversion is lossy, we do not appraise the learnability of representations as they did.",
"In addition to using the number of extra rules as a proxy for harmony between resources, one could perform cross-lingual projection of morphological tags BIBREF32 , BIBREF33 . Our approach succeeds even without parallel corpora."
],
[
"We created a tool for annotating Universal Dependencies CoNLL-U files with UniMorph annotations. Our tool is ready to use off-the-shelf today, requires no training, and is deterministic. While under-specification necessitates a lossy and imperfect conversion, ours is interpretable. Patterns of mistakes can be identified and ameliorated.",
"The tool allows a bridge between resources annotated in the Universal Dependencies and Universal Morphology (UniMorph) schemata. As the Universal Dependencies project provides a set of treebanks with token-level annotation, while the UniMorph project releases type-level annotated tables, the newfound compatibility opens up new experiments. A prime example of exploiting token- and type-level data is BIBREF34 . That work presents a part-of-speech (POS) dictionary built from Wiktionary, where the POS tagger is also constrained to options available in their type-level POS dictionary, improving performance. Our transformation means that datasets are prepared for similar experiments with morphological tagging. It would also be reasonable to incorporate this tool as a subroutine to UDPipe BIBREF35 and Udapi BIBREF36 . We leave open the task of converting in the opposite direction, turning UniMorph MSDs into Universal Dependencies MSDs.",
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation."
],
[
"We thank Hajime Senuma and John Sylak-Glassman for early comments in devising the starting language-independent mapping from Universal Dependencies to UniMorph."
]
]
} | {
"question": [
"What are the main sources of recall errors in the mapping?",
"Do they look for inconsistencies between different languages' annotations in UniMorph?",
"Do they look for inconsistencies between different UD treebanks?",
"Which languages do they validate on?"
],
"question_id": [
"642c4704a71fd01b922a0ef003f234dcc7b223cd",
"e477e494fe15a978ff9c0a5f1c88712cdaec0c5c",
"04495845251b387335bf2e77e2c423130f43c7d9",
"564dcaf8d0bcc274ab64c784e4c0f50d7a2c17ee"
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"morphology",
"morphology",
"morphology",
"morphology"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"irremediable annotation discrepancies",
"differences in choice of attributes to annotate",
"The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them",
"the two annotations encode distinct information",
"incorrectly applied UniMorph annotation",
"cross-lingual inconsistency in both resources"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation."
],
"highlighted_evidence": [
"irremediable annotation discrepancies",
"Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. "
]
}
],
"annotation_id": [
"020ac14a36ff656cccfafcb0e6e869f98de7a78e"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation."
],
"highlighted_evidence": [
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources."
]
}
],
"annotation_id": [
"1ef9f42e15ec3175a8fe9e36e5fffac30e30986d"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The contributions of this work are:"
],
"highlighted_evidence": [
"The contributions of this work are:"
]
}
],
"annotation_id": [
"a810b95038cbcc84945b1fd29cc9ec50fee5dc56"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Ar, Bg, Ca, Cs, Da, De, En, Es, Eu, Fa, Fi, Fr, Ga, He, Hi, Hu, It, La, Lt, Lv, Nb, Nl, Nn, PL, Pt, Ro, Ru, Sl, Sv, Tr, Uk, Ur",
"evidence": [
"FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method."
]
},
{
"unanswerable": false,
"extractive_spans": [
"We apply this conversion to the 31 languages",
"Arabic, Hindi, Lithuanian, Persian, and Russian. ",
"Dutch",
"Spanish"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs",
"A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.",
"FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.",
"There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.",
"We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"For the extrinsic task, the performance is reasonably similar whether UniMorph or UD; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We recognize that in every case, tagging F1 increased—albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs",
"We apply this conversion to the 31 languages",
"FLOAT SELECTED: Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.",
"Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.",
"Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.",
"UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance."
]
}
],
"annotation_id": [
"1d39c43a1873cde6fd7b76dae134a1dc84f55f52",
"253ef0cc299e30dcfceb74e8526bdf3a76e5fb9c"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
]
} | {
"caption": [
"Figure 1: Example of annotation disagreement in UD between two languages on translations of one phrase, reproduced from Malaviya et al. (2018). The final word in each, “refrescante”, is not inflected for gender: It has the same surface form whether masculine or feminine. Only in Portuguese, it is annotated as masculine to reflect grammatical concord with the noun it modifies.",
"Table 1: Inflected forms of the English verb prove, along with their Penn Treebank tags",
"Table 2: Attested annotations for the Spanish verb form “mandaba” “I/he/she/it commanded”. Note that UD separates the part of speech from the remainder of the morphosyntactic description. In each schema, order of the values is irrelevant.",
"Figure 2: Transliterated Persian with a gloss and translation from Karimi-Doostan (2011), annotated in a Persianspecific schema. The light verb construction “latme zadan” (“to damage”) has been spread across the sentence. Multiword constructions like this are a challenge for word-level tagging schemata.",
"Table 3: Token-level recall when converting Universal Dependencies tags to UniMorph tags. CSV refers to the lookup-based system. Post-editing refers to the proposed method.",
"Table 4: Tagging F1 using UD sentences annotated with either original UD MSDs or UniMorph-converted MSDs"
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"4-Table2-1.png",
"5-Figure2-1.png",
"8-Table3-1.png",
"8-Table4-1.png"
]
} |
1909.02764 | Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning | The recognition of emotions by humans is a complex process which considers multiple interacting signals such as facial expressions and both prosody and semantic content of utterances. Commonly, research on automatic recognition of emotions is, with few exceptions, limited to one modality. We describe an in-car experiment for emotion recognition from speech interactions for three modalities: the audio signal of a spoken interaction, the visual signal of the driver's face, and the manually transcribed content of utterances of the driver. We use off-the-shelf tools for emotion detection in audio and face and compare that to a neural transfer learning approach for emotion recognition from text which utilizes existing resources from other domains. We see that transfer learning enables models based on out-of-domain corpora to perform well. This method contributes up to 10 percentage points in F1, with up to 76 micro-average F1 across the emotions joy, annoyance and insecurity. Our findings also indicate that off-the-shelf-tools analyzing face and audio are not ready yet for emotion detection in in-car speech interactions without further adjustments. | {
"section_name": [
"Introduction",
"Related Work ::: Facial Expressions",
"Related Work ::: Acoustic",
"Related Work ::: Text",
"Data set Collection",
"Data set Collection ::: Study Setup and Design",
"Data set Collection ::: Procedure",
"Data set Collection ::: Data Analysis",
"Methods ::: Emotion Recognition from Facial Expressions",
"Methods ::: Emotion Recognition from Audio Signal",
"Methods ::: Emotion Recognition from Transcribed Utterances",
"Results ::: Facial Expressions and Audio",
"Results ::: Text from Transcribed Utterances",
"Results ::: Text from Transcribed Utterances ::: Experiment 1: In-Domain application",
"Results ::: Text from Transcribed Utterances ::: Experiment 2: Simple Out-Of-Domain application",
"Results ::: Text from Transcribed Utterances ::: Experiment 3: Transfer Learning application",
"Summary & Future Work",
"Acknowledgment"
],
"paragraphs": [
[
"Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance, for example an utterance (as audio signal), an image (for instance with a depicted face), or a textual unit (e.g., a transcribed utterance, a sentence, or a Tweet). The set of emotions is often following the original definition by Ekman Ekman1992, which includes anger, fear, disgust, sadness, joy, and surprise, or the extension by Plutchik Plutchik1980 who adds trust and anticipation.",
"Most work in emotion detection is limited to one modality. Exceptions include Busso2004 and Sebe2005, who investigate multimodal approaches combining speech with facial information. Emotion recognition in speech can utilize semantic features as well BIBREF0. Note that the term “multimodal” is also used beyond the combination of vision, audio, and text. For example, Soleymani2012 use it to refer to the combination of electroencephalogram, pupillary response and gaze distance.",
"In this paper, we deal with the specific situation of car environments as a testbed for multimodal emotion recognition. This is an interesting environment since it is, to some degree, a controlled environment: Dialogue partners are limited in movement, the degrees of freedom for occurring events are limited, and several sensors which are useful for emotion recognition are already integrated in this setting. More specifically, we focus on emotion recognition from speech events in a dialogue with a human partner and with an intelligent agent.",
"Also from the application point of view, the domain is a relevant choice: Past research has shown that emotional intelligence is beneficial for human computer interaction. Properly processing emotions in interactions increases the engagement of users and can improve performance when a specific task is to be fulfilled BIBREF1, BIBREF2, BIBREF3, BIBREF4. This is mostly based on the aspect that machines communicating with humans appear to be more trustworthy when they show empathy and are perceived as being natural BIBREF3, BIBREF5, BIBREF4.",
"Virtual agents play an increasingly important role in the automotive context and the speech modality is increasingly being used in cars due to its potential to limit distraction. It has been shown that adapting the in-car speech interaction system according to the drivers' emotional state can help to enhance security, performance as well as the overall driving experience BIBREF6, BIBREF7.",
"With this paper, we investigate how each of the three considered modalitites, namely facial expressions, utterances of a driver as an audio signal, and transcribed text contributes to the task of emotion recognition in in-car speech interactions. We focus on the five emotions of joy, insecurity, annoyance, relaxation, and boredom since terms corresponding to so-called fundamental emotions like fear have been shown to be associated to too strong emotional states than being appropriate for the in-car context BIBREF8. Our first contribution is the description of the experimental setup for our data collection. Aiming to provoke specific emotions with situations which can occur in real-world driving scenarios and to induce speech interactions, the study was conducted in a driving simulator. Based on the collected data, we provide baseline predictions with off-the-shelf tools for face and speech emotion recognition and compare them to a neural network-based approach for emotion recognition from text. Our second contribution is the introduction of transfer learning to adapt models trained on established out-of-domain corpora to our use case. We work on German language, therefore the transfer consists of a domain and a language transfer."
],
[
"A common approach to encode emotions for facial expressions is the facial action coding system FACS BIBREF9, BIBREF10, BIBREF11. As the reliability and reproducability of findings with this method have been critically discussed BIBREF12, the trend has increasingly shifted to perform the recognition directly on images and videos, especially with deep learning. For instance, jung2015joint developed a model which considers temporal geometry features and temporal appearance features from image sequences. kim2016hierarchical propose an ensemble of convolutional neural networks which outperforms isolated networks.",
"In the automotive domain, FACS is still popular. Ma2017 use support vector machines to distinguish happy, bothered, confused, and concentrated based on data from a natural driving environment. They found that bothered and confused are difficult to distinguish, while happy and concentrated are well identified. Aiming to reduce computational cost, Tews2011 apply a simple feature extraction using four dots in the face defining three facial areas. They analyze the variance of the three facial areas for the recognition of happy, anger and neutral. Ihme2018 aim at detecting frustration in a simulator environment. They induce the emotion with specific scenarios and a demanding secondary task and are able to associate specific face movements according to FACS. Paschero2012 use OpenCV (https://opencv.org/) to detect the eyes and the mouth region and track facial movements. They simulate different lightning conditions and apply a multilayer perceptron for the classification task of Ekman's set of fundamental emotions.",
"Overall, we found that studies using facial features usually focus on continuous driver monitoring, often in driver-only scenarios. In contrast, our work investigates the potential of emotion recognition during speech interactions."
],
[
"Past research on emotion recognition from acoustics mainly concentrates on either feature selection or the development of appropriate classifiers. rao2013emotion as well as ververidis2004automatic compare local and global features in support vector machines. Next to such discriminative approaches, hidden Markov models are well-studied, however, there is no agreement on which feature-based classifier is most suitable BIBREF13. Similar to the facial expression modality, recent efforts on applying deep learning have been increased for acoustic speech processing. For instance, lee2015high use a recurrent neural network and palaz2015analysis apply a convolutional neural network to the raw speech signal. Neumann2017 as well as Trigeorgis2016 analyze the importance of features in the context of deep learning-based emotion recognition.",
"In the automotive sector, Boril2011 approach the detection of negative emotional states within interactions between driver and co-driver as well as in calls of the driver towards the automated spoken dialogue system. Using real-world driving data, they find that the combination of acoustic features and their respective Gaussian mixture model scores performs best. Schuller2006 collects 2,000 dialog turns directed towards an automotive user interface and investigate the classification of anger, confusion, and neutral. They show that automatic feature generation and feature selection boost the performance of an SVM-based classifier. Further, they analyze the performance under systematically added noise and develop methods to mitigate negative effects. For more details, we refer the reader to the survey by Schuller2018. In this work, we explore the straight-forward application of domain independent software to an in-car scenario without domain-specific adaptations."
],
[
"Previous work on emotion analysis in natural language processing focuses either on resource creation or on emotion classification for a specific task and domain. On the side of resource creation, the early and influential work of Pennebaker2015 is a dictionary of words being associated with different psychologically relevant categories, including a subset of emotions. Another popular resource is the NRC dictionary by Mohammad2012b. It contains more than 10000 words for a set of discrete emotion classes. Other resources include WordNet Affect BIBREF14 which distinguishes particular word classes. Further, annotated corpora have been created for a set of different domains, for instance fairy tales BIBREF15, Blogs BIBREF16, Twitter BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, Facebook BIBREF22, news headlines BIBREF23, dialogues BIBREF24, literature BIBREF25, or self reports on emotion events BIBREF26 (see BIBREF27 for an overview).",
"To automatically assign emotions to textual units, the application of dictionaries has been a popular approach and still is, particularly in domains without annotated corpora. Another approach to overcome the lack of huge amounts of annotated training data in a particular domain or for a specific topic is to exploit distant supervision: use the signal of occurrences of emoticons or specific hashtags or words to automatically label the data. This is sometimes referred to as self-labeling BIBREF21, BIBREF28, BIBREF29, BIBREF30.",
"A variety of classification approaches have been tested, including SNoW BIBREF15, support vector machines BIBREF16, maximum entropy classification, long short-term memory network, and convolutional neural network models BIBREF18. More recently, the state of the art is the use of transfer learning from noisy annotations to more specific predictions BIBREF29. Still, it has been shown that transferring from one domain to another is challenging, as the way emotions are expressed varies between areas BIBREF27. The approach by Felbo2017 is different to our work as they use a huge noisy data set for pretraining the model while we use small high quality data sets instead.",
"Recently, the state of the art has also been pushed forward with a set of shared tasks, in which the participants with top results mostly exploit deep learning methods for prediction based on pretrained structures like embeddings or language models BIBREF21, BIBREF31, BIBREF20.",
"Our work follows this approach and builds up on embeddings with deep learning. Furthermore, we approach the application and adaption of text-based classifiers to the automotive domain with transfer learning."
],
[
"The first contribution of this paper is the construction of the AMMER data set which we describe in the following. We focus on the drivers' interactions with both a virtual agent as well as a co-driver. To collect the data in a safe and controlled environment and to be able to consider a variety of predefined driving situations, the study was conducted in a driving simulator."
],
[
"The study environment consists of a fixed-base driving simulator running Vires's VTD (Virtual Test Drive, v2.2.0) simulation software (https://vires.com/vtd-vires-virtual-test-drive/). The vehicle has an automatic transmission, a steering wheel and gas and brake pedals. We collect data from video, speech and biosignals (Empatica E4 to record heart rate, electrodermal activity, skin temperature, not further used in this paper) and questionnaires. Two RGB cameras are fixed in the vehicle to capture the drivers face, one at the sun shield above the drivers seat and one in the middle of the dashboard. A microphone is placed on the center console. One experimenter sits next to the driver, the other behind the simulator. The virtual agent accompanying the drive is realized as Wizard-of-Oz prototype which enables the experimenter to manually trigger prerecorded voice samples playing trough the in-car speakers and to bring new content to the center screen. Figure FIGREF4 shows the driving simulator.",
"The experimental setting is comparable to an everyday driving task. Participants are told that the goal of the study is to evaluate and to improve an intelligent driving assistant. To increase the probability of emotions to arise, participants are instructed to reach the destination of the route as fast as possible while following traffic rules and speed limits. They are informed that the time needed for the task would be compared to other participants. The route comprises highways, rural roads, and city streets. A navigation system with voice commands and information on the screen keeps the participants on the predefined track.",
"To trigger emotion changes in the participant, we use the following events: (i) a car on the right lane cutting off to the left lane when participants try to overtake followed by trucks blocking both lanes with a slow overtaking maneuver (ii) a skateboarder who appears unexpectedly on the street and (iii) participants are praised for reaching the destination unexpectedly quickly in comparison to previous participants.",
"Based on these events, we trigger three interactions (Table TABREF6 provides examples) with the intelligent agent (Driver-Agent Interactions, D–A). Pretending to be aware of the current situation, e. g., to recognize unusual driving behavior such as strong braking, the agent asks the driver to explain his subjective perception of these events in detail. Additionally, we trigger two more interactions with the intelligent agent at the beginning and at the end of the drive, where participants are asked to describe their mood and thoughts regarding the (upcoming) drive. This results in five interactions between the driver and the virtual agent.",
"Furthermore, the co-driver asks three different questions during sessions with light traffic and low cognitive demand (Driver-Co-Driver Interactions, D–Co). These questions are more general and non-traffic-related and aim at triggering the participants' memory and fantasy. Participants are asked to describe their last vacation, their dream house and their idea of the perfect job. In sum, there are eight interactions per participant (5 D–A, 3 D–Co)."
],
[
"At the beginning of the study, participants were welcomed and the upcoming study procedure was explained. Subsequently, participants signed a consent form and completed a questionnaire to provide demographic information. After that, the co-driving experimenter started with the instruction in the simulator which was followed by a familiarization drive consisting of highway and city driving and covering different driving maneuvers such as tight corners, lane changing and strong braking. Subsequently, participants started with the main driving task. The drive had a duration of 20 minutes containing the eight previously mentioned speech interactions. After the completion of the drive, the actual goal of improving automatic emotional recognition was revealed and a standard emotional intelligence questionnaire, namely the TEIQue-SF BIBREF32, was handed to the participants. Finally, a retrospective interview was conducted, in which participants were played recordings of their in-car interactions and asked to give discrete (annoyance, insecurity, joy, relaxation, boredom, none, following BIBREF8) was well as dimensional (valence, arousal, dominance BIBREF33 on a 11-point scale) emotion ratings for the interactions and the according situations. We only use the discrete class annotations in this paper."
],
[
"Overall, 36 participants aged 18 to 64 years ($\\mu $=28.89, $\\sigma $=12.58) completed the experiment. This leads to 288 interactions, 180 between driver and the agent and 108 between driver and co-driver. The emotion self-ratings from the participants yielded 90 utterances labeled with joy, 26 with annoyance, 49 with insecurity, 9 with boredom, 111 with relaxation and 3 with no emotion. One example interaction per interaction type and emotion is shown in Table TABREF7. For further experiments, we only use joy, annoyance/anger, and insecurity/fear due to the small sample size for boredom and no emotion and under the assumption that relaxation brings little expressivity."
],
[
"We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored."
],
[
"We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. Low-confidence predictions are interpreted as “no emotion”. We accept the emotion with the highest score as the discrete prediction otherwise."
],
[
"For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a soft max output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (These parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model.",
"We train models on a variety of corpora, namely the common format published by BIBREF27 of the FigureEight (formally known as Crowdflower) data set of social media, the ISEAR data BIBREF40 (self-reported emotional events), and, the Twitter Emotion Corpus (TEC, weakly annotated Tweets with #anger, #disgust, #fear, #happy, #sadness, and #surprise, Mohammad2012). From all corpora, we use instances with labels fear, anger, or joy. These corpora are English, however, we do predictions on German utterances. Therefore, each corpus is preprocessed to German with Google Translate. We remove URLs, user tags (“@Username”), punctuation and hash signs. The distributions of the data sets are shown in Table TABREF12.",
"To adapt models trained on these data, we apply transfer learning as follows: The model is first trained until convergence on one out-of-domain corpus (only on classes fear, joy, anger for compatibility reasons). Then, the parameters of the bi-LSTM layer are frozen and the remaining layers are further trained on AMMER. This procedure is illustrated in Figure FIGREF13"
],
[
"Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging.",
"Regarding the audio signal, we observe a macro $\\text{F}_1$ score of 29 % (P=42 %, R=22 %). There is a bias towards negative emotions, which results in a small number of detected joy predictions (R=4 %). Insecurity and annoyance are frequently confused."
],
[
"The experimental setting for the evaluation of emotion recognition from text is as follows: We evaluate the BiLSTM model in three different experiments: (1) in-domain, (2) out-of-domain and (3) transfer learning. For all experiments we train on the classes anger/annoyance, fear/insecurity and joy. Table TABREF19 shows all results for the comparison of these experimental settings."
],
[
"We first set a baseline by validating our models on established corpora. We train the baseline model on 60 % of each data set listed in Table TABREF12 and evaluate that model with 40 % of the data from the same domain (results shown in the column “In-Domain” in Table TABREF19). Excluding AMMER, we achieve an average micro $\\text{F}_1$ of 68 %, with best results of F$_1$=73 % on TEC. The model trained on our AMMER corpus achieves an F1 score of 57%. This is most probably due to the small size of this data set and the class bias towards joy, which makes up more than half of the data set. These results are mostly in line with Bostan2018."
],
[
"Now we analyze how well the models trained in Experiment 1 perform when applied to our data set. The results are shown in column “Simple” in Table TABREF19. We observe a clear drop in performance, with an average of F$_1$=48 %. The best performing model is again the one trained on TEC, en par with the one trained on the Figure8 data. The model trained on ISEAR performs second best in Experiment 1, it performs worst in Experiment 2."
],
[
"To adapt models trained on previously existing data sets to our particular application, the AMMER corpus, we apply transfer learning. Here, we perform leave-one-out cross validation. As pre-trained models we use each model from Experiment 1 and further optimize with the training subset of each crossvalidation iteration of AMMER. The results are shown in the column “Transfer L.” in Table TABREF19. The confusion matrix is also depicted in Table TABREF16.",
"With this procedure we achieve an average performance of F$_1$=75 %, being better than the results from the in-domain Experiment 1. The best performance of F$_1$=76 % is achieved with the model pre-trained on each data set, except for ISEAR. All transfer learning models clearly outperform their simple out-of-domain counterpart.",
"To ensure that this performance increase is not only due to the larger data set, we compare these results to training the model without transfer on a corpus consisting of each corpus together with AMMER (again, in leave-one-out crossvalidation). These results are depicted in column “Joint C.”. Thus, both settings, “transfer learning” and “joint corpus” have access to the same information.",
"The results show an increase in performance in contrast to not using AMMER for training, however, the transfer approach based on partial retraining the model shows a clear improvement for all models (by 7pp for Figure8, 10pp for EmoInt, 8pp for TEC, 13pp for ISEAR) compared to the ”Joint” setup."
],
[
"We described the creation of the multimodal AMMER data with emotional speech interactions between a driver and both a virtual agent and a co-driver. We analyzed the modalities of facial expressions, acoustics, and transcribed utterances regarding their potential for emotion recognition during in-car speech interactions. We applied off-the-shelf emotion recognition tools for facial expressions and acoustics. For transcribed text, we developed a neural network-based classifier with transfer learning exploiting existing annotated corpora. We find that analyzing transcribed utterances is most promising for classification of the three emotional states of joy, annoyance and insecurity.",
"Our results for facial expressions indicate that there is potential for the classification of joy, however, the states of annoyance and insecurity are not well recognized. Future work needs to investigate more sophisticated approaches to map frame predictions to sequence predictions. Furthermore, movements of the mouth region during speech interactions might negatively influence the classification from facial expressions. Therefore, the question remains how facial expressions can best contribute to multimodal detection in speech interactions.",
"Regarding the classification from the acoustic signal, the application of off-the-shelf classifiers without further adjustments seems to be challenging. We find a strong bias towards negative emotional states for our experimental setting. For instance, the personalization of the recognition algorithm (e. g., mean and standard deviation normalization) could help to adapt the classification for specific speakers and thus to reduce this bias. Further, the acoustic environment in the vehicle interior has special properties and the recognition software might need further adaptations.",
"Our transfer learning-based text classifier shows considerably better results. This is a substantial result in its own, as only one previous method for transfer learning in emotion recognition has been proposed, in which a sentiment/emotion specific source for labels in pre-training has been used, to the best of our knowledge BIBREF29. Other applications of transfer learning from general language models include BIBREF41, BIBREF42. Our approach is substantially different, not being trained on a huge amount of noisy data, but on smaller out-of-domain sets of higher quality. This result suggests that emotion classification systems which work across domains can be developed with reasonable effort.",
"For a productive application of emotion detection in the context of speech events we conclude that a deployed system might perform best with a speech-to-text module followed by an analysis of the text. Further, in this work, we did not explore an ensemble model or the interaction of different modalities. Thus, future work should investigate the fusion of multiple modalities in a single classifier."
],
[
"We thank Laura-Ana-Maria Bostan for discussions and data set preparations. This research has partially been funded by the German Research Council (DFG), project SEAT (KL 2869/1-1)."
]
]
} | {
"question": [
"Does the paper evaluate any adjustment to improve the predicion accuracy of face and audio features?",
"How is face and audio data analysis evaluated?",
"What is the baseline method for the task?",
"What are the emotion detection tools used for audio and face input?"
],
"question_id": [
"f3d0e6452b8d24b7f9db1fd898d1fbe6cd23f166",
"9b1d789398f1f1a603e4741a5eee63ccaf0d4a4f",
"00bcdffff7e055f99aaf1b05cf41c98e2748e948",
"f92ee3c5fce819db540bded3cfcc191e21799cb1"
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"German",
"German",
"German",
"German"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"600f0c923d0043277bfac1962a398d487bdca7fa"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"confusion matrices",
"$\\text{F}_1$ score"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging."
],
"highlighted_evidence": [
"Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35)."
]
}
],
"annotation_id": [
"9b32c0c17e68ed2a3a61811e6ff7d83bc2caa7d6"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "For the emotion recognition from text they use described neural network as baseline.\nFor audio and face there is no baseline.",
"evidence": [
"For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a soft max output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (These parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model."
],
"highlighted_evidence": [
"For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a soft max output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (These parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model."
]
}
],
"annotation_id": [
"d7c7133b07c598abc8e12d2366753d72e8b02f3c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored.",
"We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. Low-confidence predictions are interpreted as “no emotion”. We accept the emotion with the highest score as the discrete prediction otherwise."
],
"highlighted_evidence": [
" We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear.",
"We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance."
]
},
{
"unanswerable": false,
"extractive_spans": [
"cannot be disclosed due to licensing restrictions"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored."
],
"highlighted_evidence": [
"We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions)."
]
}
],
"annotation_id": [
"050ddcbace29bfd6201c7b4813158d89c290c7b5",
"65fa4bf0328b2368ffb3570d974e8232d6b98731"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [
"Figure 1: The setup of the driving simulator.",
"Table 1: Examples for triggered interactions with translations to English. (D: Driver, A: Agent, Co: Co-Driver)",
"Table 2: Examples from the collected data set (with translation to English). E: Emotion, IT: interaction type with agent (A) and with Codriver (C). J: Joy, A: Annoyance, I: Insecurity, B: Boredom, R: Relaxation, N: No emotion.",
"Figure8 8,419 1,419 9,179 19,017 EmoInt 2,252 1,701 1,616 5,569 ISEAR 1,095 1,096 1,094 3,285 TEC 2,782 1,534 8,132 12,448 AMMER 49 26 90 165",
"Figure 2: Model for Transfer Learning from Text. Grey boxes contain frozen parameters in the corresponding learning step.",
"Figure8 66 55 59 76 EmoInt 62 48 56 76 TEC 73 55 58 76 ISEAR 70 35 59 72 AMMER 57 — — —",
"Table 4: Confusion Matrix for Face Classification and Audio Classification (on full AMMER data) and for transfer learning from text (training set of EmoInt and test set of AMMER). Insecurity, annoyance and joy are the gold labels. Fear, anger and joy are predictions.",
"Table 5: Performance for classification from vision, audio, and transfer learning from text (training set of EmoInt)."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Figure8,419-1.png",
"6-Figure2-1.png",
"7-Figure66-1.png",
"7-Table4-1.png",
"7-Table5-1.png"
]
} |
1905.11901 | Revisiting Low-Resource Neural Machine Translation: A Case Study | It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results. In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. We discuss some pitfalls to be aware of when training low-resource NMT systems, and recent techniques that have shown to be especially helpful in low-resource settings, resulting in a set of best practices for low-resource NMT. In our experiments on German--English with different amounts of IWSLT14 training data, we show that, without the use of any auxiliary monolingual or multilingual data, an optimized NMT system can outperform PBSMT with far less data than previously claimed. We also apply these techniques to a low-resource Korean-English dataset, surpassing previously reported results by 4 BLEU. | {
"section_name": [
"Introduction",
"Low-Resource Translation Quality Compared Across Systems",
"Improving Low-Resource Neural Machine Translation",
"Mainstream Improvements",
"Language Representation",
"Hyperparameter Tuning",
"Lexical Model",
"Data and Preprocessing",
"PBSMT Baseline",
"NMT Systems",
"Results",
"Conclusions",
"Acknowledgments",
"Hyperparameters",
"Sample Translations"
],
"paragraphs": [
[
"While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows:"
],
[
"Figure FIGREF4 reproduces a plot by BIBREF3 which shows that their NMT system only outperforms their PBSMT system when more than 100 million words (approx. 5 million sentences) of parallel training data are available. Results shown by BIBREF4 are similar, showing that unsupervised NMT outperforms supervised systems if few parallel resources are available. In both papers, NMT systems are trained with hyperparameters that are typical for high-resource settings, and the authors did not tune hyperparameters, or change network architectures, to optimize NMT for low-resource conditions."
],
[
"The bulk of research on low-resource NMT has focused on exploiting monolingual data, or parallel data involving other language pairs. Methods to improve NMT with monolingual data range from the integration of a separately trained language model BIBREF5 to the training of parts of the NMT model with additional objectives, including a language modelling objective BIBREF5 , BIBREF6 , BIBREF7 , an autoencoding objective BIBREF8 , BIBREF9 , or a round-trip objective, where the model is trained to predict monolingual (target-side) training data that has been back-translated into the source language BIBREF6 , BIBREF10 , BIBREF11 . As an extreme case, models that rely exclusively on monolingual data have been shown to work BIBREF12 , BIBREF13 , BIBREF14 , BIBREF4 . Similarly, parallel data from other language pairs can be used to pre-train the network or jointly learn representations BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .",
"While semi-supervised and unsupervised approaches have been shown to be very effective for some language pairs, their effectiveness depends on the availability of large amounts of suitable auxiliary data, and other conditions being met. For example, the effectiveness of unsupervised methods is impaired when languages are morphologically different, or when training domains do not match BIBREF22 ",
"More broadly, this line of research still accepts the premise that NMT models are data-inefficient and require large amounts of auxiliary data to train. In this work, we want to re-visit this point, and will focus on techniques to make more efficient use of small amounts of parallel training data. Low-resource NMT without auxiliary data has received less attention; work in this direction includes BIBREF23 , BIBREF24 ."
],
[
"We consider the hyperparameters used by BIBREF3 to be our baseline. This baseline does not make use of various advances in NMT architectures and training tricks. In contrast to the baseline, we use a BiDeep RNN architecture BIBREF25 , label smoothing BIBREF26 , dropout BIBREF27 , word dropout BIBREF28 , layer normalization BIBREF29 and tied embeddings BIBREF30 ."
],
[
"Subword representations such as BPE BIBREF31 have become a popular choice to achieve open-vocabulary translation. BPE has one hyperparameter, the number of merge operations, which determines the size of the final vocabulary. For high-resource settings, the effect of vocabulary size on translation quality is relatively small; BIBREF32 report mixed results when comparing vocabularies of 30k and 90k subwords.",
"In low-resource settings, large vocabularies result in low-frequency (sub)words being represented as atomic units at training time, and the ability to learn good high-dimensional representations of these is doubtful. BIBREF33 propose a minimum frequency threshold for subword units, and splitting any less frequent subword into smaller units or characters. We expect that such a threshold reduces the need to carefully tune the vocabulary size to the dataset, leading to more aggressive segmentation on smaller datasets."
],
[
"Due to long training times, hyperparameters are hard to optimize by grid search, and are often re-used across experiments. However, best practices differ between high-resource and low-resource settings. While the trend in high-resource settings is towards using larger and deeper models, BIBREF24 use smaller and fewer layers for smaller datasets. Previous work has argued for larger batch sizes in NMT BIBREF35 , BIBREF36 , but we find that using smaller batches is beneficial in low-resource settings. More aggressive dropout, including dropping whole words at random BIBREF37 , is also likely to be more important. We report results on a narrow hyperparameter search guided by previous work and our own intuition."
],
[
"Finally, we implement and test the lexical model by BIBREF24 , which has been shown to be beneficial in low-data conditions. The core idea is to train a simple feed-forward network, the lexical model, jointly with the original attentional NMT model. The input of the lexical model at time step INLINEFORM0 is the weighted average of source embeddings INLINEFORM1 (the attention weights INLINEFORM2 are shared with the main model). After a feedforward layer (with skip connection), the lexical model's output INLINEFORM3 is combined with the original model's hidden state INLINEFORM4 before softmax computation. INLINEFORM5 ",
" Our implementation adds dropout and layer normalization to the lexical model.",
"",
""
],
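The lexical model described above can be illustrated with a short sketch. The following PyTorch-style code is a hedged approximation, not the authors' Nematus implementation: the module name, tensor shapes, the tanh activation and the additive combination of logits are assumptions made for illustration only.

import torch
import torch.nn as nn

class LexicalModel(nn.Module):
    # A feed-forward 'lexical model' that re-uses the main model's attention
    # weights over the source embeddings (sketch only; names and shapes are assumed).
    def __init__(self, emb_dim, hidden_dim, vocab_size, dropout=0.2):
        super().__init__()
        self.ff = nn.Linear(emb_dim, emb_dim)            # feed-forward layer
        self.norm = nn.LayerNorm(emb_dim)                # layer normalization (added on top)
        self.drop = nn.Dropout(dropout)                  # dropout (added on top)
        self.out_lex = nn.Linear(emb_dim, vocab_size, bias=False)
        self.out_main = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_emb, attn_weights, dec_hidden):
        # src_emb: (batch, src_len, emb_dim); attn_weights: (batch, src_len); dec_hidden: (batch, hidden_dim)
        # weighted average of source embeddings; attention weights are shared with the main model
        avg_emb = torch.bmm(attn_weights.unsqueeze(1), src_emb).squeeze(1)
        h_lex = torch.tanh(self.ff(avg_emb)) + avg_emb   # feed-forward layer with skip connection
        h_lex = self.drop(self.norm(h_lex))
        # combine the lexical output with the decoder hidden state before the softmax
        logits = self.out_main(dec_hidden) + self.out_lex(h_lex)
        return torch.log_softmax(logits, dim=-1)

In such a setup, the lexical model is trained jointly with the main attentional model, sharing its attention weights while providing a more direct path from source embeddings to output predictions.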
[
"We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.",
"As a second language pair, we evaluate our systems on a Korean–English dataset with around 90000 parallel sentences of training data, 1000 for development, and 2000 for testing.",
"For both PBSMT and NMT, we apply the same tokenization and truecasing using Moses scripts. For NMT, we also learn BPE subword segmentation with 30000 merge operations, shared between German and English, and independently for Korean INLINEFORM0 English.",
"To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see SECREF7 ). Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.",
"Translation outputs are detruecased, detokenized, and compared against the reference with cased BLEU using sacreBLEU BIBREF40 , BIBREF41 . Like BIBREF39 , we report BLEU on the concatenated dev sets for IWSLT 2014 (tst2010, tst2011, tst2012, dev2010, dev2012)."
],
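To make the subsampling procedure concrete, the following Python sketch halves a parallel corpus five times, as described above. It is an illustrative approximation rather than the authors' actual preprocessing script; the file naming scheme and the fixed random seed are assumptions.

import random

def subsample(src_path, tgt_path, steps=5, seed=42):
    # Randomly discard half of the parallel data at each step (sketch only).
    random.seed(seed)
    with open(src_path, encoding="utf-8") as f_src, open(tgt_path, encoding="utf-8") as f_tgt:
        pairs = list(zip(f_src, f_tgt))
    for step in range(1, steps + 1):
        pairs = random.sample(pairs, len(pairs) // 2)  # keep a random half of the previous step
        with open(f"{src_path}.half{step}", "w", encoding="utf-8") as out_src, \
             open(f"{tgt_path}.half{step}", "w", encoding="utf-8") as out_tgt:
            for src_line, tgt_line in pairs:
                out_src.write(src_line)
                out_tgt.write(tgt_line)

# e.g. subsample("train.de", "train.en") would yield train.{de,en}.half1 ... .half5,
# i.e. roughly 80k, 40k, 20k, 10k and 5k sentence pairs for the IWSLT corpus.

Truecasing and BPE segmentation would still be learned on the full corpus, as stated above.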
[
"We use Moses BIBREF42 to train a PBSMT system. We use MGIZA BIBREF43 to train word alignments, and lmplz BIBREF44 for a 5-gram LM. Feature weights are optimized on the dev set to maximize BLEU with batch MIRA BIBREF45 – we perform multiple runs where indicated. Unlike BIBREF3 , we do not use extra data for the LM. Both PBSMT and NMT can benefit from monolingual data, so the availability of monolingual data is no longer an exclusive advantage of PBSMT (see SECREF5 )."
],
[
"We train neural systems with Nematus BIBREF46 . Our baseline mostly follows the settings in BIBREF3 ; we use adam BIBREF47 and perform early stopping based on dev set BLEU. We express our batch size in number of tokens, and set it to 4000 in the baseline (comparable to a batch size of 80 sentences used in previous work).",
"We subsequently add the methods described in section SECREF3 , namely the bideep RNN, label smoothing, dropout, tied embeddings, layer normalization, changes to the BPE vocabulary size, batch size, model depth, regularization parameters and learning rate. Detailed hyperparameters are reported in Appendix SECREF7 ."
],
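Because the batch size is expressed in tokens rather than sentences, batches have to be assembled by a token-budget bucketing routine along the following lines. This is a simplified sketch (it omits the length-based sorting and padding an actual toolkit such as Nematus performs); the 4000-token budget mirrors the baseline setting, and 1000 tokens the low-resource setting discussed later.

def token_batches(sentences, max_tokens=4000):
    # Group tokenized sentences into batches whose total token count stays within a budget.
    batch, batch_tokens = [], 0
    for sent in sentences:
        if batch and batch_tokens + len(sent) > max_tokens:
            yield batch
            batch, batch_tokens = [], 0
        batch.append(sent)
        batch_tokens += len(sent)
    if batch:
        yield batch

# e.g. list(token_batches(corpus, max_tokens=1000)) for the smaller-batch low-resource runs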
[
"Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6–7 BLEU in both data conditions.",
"In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 token results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) has a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2 INLINEFORM2 16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9 INLINEFORM3 32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized to the ultra-low data condition (8) to other data conditions, and Korean INLINEFORM4 English, for simplicity.",
"For a comparison with PBSMT, and across different data settings, consider Figure FIGREF19 , which shows the result of PBSMT, our NMT baseline, and our optimized NMT system. Our NMT baseline still performs worse than the PBSMT system for 3.2M words of training data, which is consistent with the results by BIBREF3 . However, our optimized NMT system shows strong improvements, and outperforms the PBSMT system across all data settings. Some sample translations are shown in Appendix SECREF8 .",
"For comparison to previous work, we report lowercased and tokenized results on the full IWSLT 14 training set in Table TABREF20 . Our results far outperform the RNN-based results reported by BIBREF48 , and are on par with the best reported results on this dataset.",
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
],
[
"Our results demonstrate that NMT is in fact a suitable choice in low-data settings, and can outperform PBSMT with far less parallel training data than previously claimed. Recently, the main trend in low-resource MT research has been the better exploitation of monolingual and multilingual resources. Our results show that low-resource NMT is very sensitive to hyperparameters such as BPE vocabulary size, word dropout, and others, and by following a set of best practices, we can train competitive NMT systems without relying on auxiliary resources. This has practical relevance for languages where large amounts of monolingual data, or multilingual data involving related languages, are not available. Even though we focused on only using parallel data, our results are also relevant for work on using auxiliary data to improve low-resource MT. Supervised systems serve as an important baseline to judge the effectiveness of semisupervised or unsupervised approaches, and the quality of supervised systems trained on little data can directly impact semi-supervised workflows, for instance for the back-translation of monolingual data."
],
[
"Rico Sennrich has received funding from the Swiss National Science Foundation in the project CoNTra (grant number 105212_169888). Biao Zhang acknowledges the support of the Baidu Scholarship."
],
[
"Table TABREF23 lists hyperparameters used for the different experiments in the ablation study (Table 2). Hyperparameters were kept constant across different data settings, except for the validation interval and subword vocabulary size (see Table 1)."
],
[
"Table TABREF24 shows some sample translations that represent typical errors of our PBSMT and NMT systems, trained with ultra-low (100k words) and low (3.2M words) amounts of data. For unknown words such as blutbefleckten (`bloodstained') or Spaniern (`Spaniards', `Spanish'), PBSMT systems default to copying, while NMT systems produce translations on a subword-level, with varying success (blue-flect, bleed; spaniers, Spanians). NMT systems learn some syntactic disambiguation even with very little data, for example the translation of das and die as relative pronouns ('that', 'which', 'who'), while PBSMT produces less grammatical translation. On the flip side, the ultra low-resource NMT system ignores some unknown words in favour of a more-or-less fluent, but semantically inadequate translation: erobert ('conquered') is translated into doing, and richtig aufgezeichnet ('registered correctly', `recorded correctly') into really the first thing."
]
]
} | {
"question": [
"what amounts of size were used on german-english?",
"what were their experimental results in the low-resource dataset?",
"what are the methods they compare with in the korean-english dataset?",
"what pitfalls are mentioned in the paper?"
],
"question_id": [
"4547818a3bbb727c4bb4a76554b5a5a7b5c5fedb",
"07d7652ad4a0ec92e6b44847a17c378b0d9f57f5",
"9f3444c9fb2e144465d63abf58520cddd4165a01",
"2348d68e065443f701d8052018c18daa4ecc120e"
],
"nlp_background": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Training data with 159000, 80000, 40000, 20000, 10000 and 5000 sentences, and 7584 sentences for development",
"evidence": [
"We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.",
"To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see SECREF7 ). Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.",
"FLOAT SELECTED: Table 1: Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE→EN data, and for KO→EN data."
],
"highlighted_evidence": [
"We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.",
"Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.",
"FLOAT SELECTED: Table 1: Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE→EN data, and for KO→EN data."
]
},
{
"unanswerable": false,
"extractive_spans": [
"ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6–7 BLEU in both data conditions.",
"In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 token results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) has a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2 INLINEFORM2 16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9 INLINEFORM3 32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized to the ultra-low data condition (8) to other data conditions, and Korean INLINEFORM4 English, for simplicity.",
"FLOAT SELECTED: Table 2: German→English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported."
],
"highlighted_evidence": [
"Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6–7 BLEU in both data conditions.\n\nIn the ultra-low data condition, reducing the BPE vocabulary size is very effecti",
"FLOAT SELECTED: Table 2: German→English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported."
]
}
],
"annotation_id": [
"073418dd5dee73e79f085f846b12ab2255d1fba9",
"8ebf6954a9db622ffa0e1a1a578dc757efb66253"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"10.37 BLEU"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
],
"highlighted_evidence": [
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
]
}
],
"annotation_id": [
"b518fdaf97adaadd15159d3125599dd99ca75555"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"gu-EtAl:2018:EMNLP1"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
],
"highlighted_evidence": [
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1."
]
}
],
"annotation_id": [
"2482e2af43d793c30436fba78a147768185b2d29"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"highly data-inefficient",
"underperform phrase-based statistical machine translation"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows:"
],
"highlighted_evidence": [
"While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . "
]
}
],
"annotation_id": [
"ed93260f3f867af4f9275e5615fda86474ea51ee"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
]
} | {
"caption": [
"Figure 4: Translations of the first sentence of the test set using NMT system trained on varying amounts of training data. Under low resource conditions, NMT produces fluent output unrelated to the input.",
"Table 1: Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE→EN data, and for KO→EN data.",
"Table 2: German→English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported.",
"Figure 2: German→English learning curve, showing BLEU as a function of the amount of parallel training data, for PBSMT and NMT.",
"Table 3: Results on full IWSLT14 German→English data on tokenized and lowercased test set with multi-bleu.perl.",
"Table 4: Korean→English results. Mean and standard deviation of three training runs reported.",
"Table 5: Configurations of NMT systems reported in Table 2. Empty fields indicate that hyperparameter was unchanged compared to previous systems.",
"Table 6: German→English translation examples with phrase-based SMT and NMT systems trained on 100k/3.2M words of parallel data."
],
"file": [
"1-Figure4-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Figure2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"10-Table5-1.png",
"11-Table6-1.png"
]
} |
1912.01252 | Facilitating on-line opinion dynamics by mining expressions of causation. The case of climate change debates on The Guardian | News website comment sections are spaces where potentially conflicting opinions and beliefs are voiced. Addressing questions of how to study such cultural and societal conflicts through technological means, the present article critically examines possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics. These investigations are guided by a discussion of an experimental observatory for mining and analyzing opinions from climate change-related user comments on news articles from the this http URL. This observatory combines causal mapping methods with computational text analysis in order to mine beliefs and visualize opinion landscapes based on expressions of causation. By (1) introducing digital methods and open infrastructures for data exploration and analysis and (2) engaging in debates about the implications of such methods and infrastructures, notably in terms of the leap from opinion observation to debate facilitation, the article aims to make a practical and theoretical contribution to the study of opinion dynamics and conflict in new media environments. | {
"section_name": [
"Introduction ::: Background",
"Introduction ::: Objective",
"Introduction ::: Data: the communicative setting of TheGuardian.com",
"Mining opinions and beliefs from texts",
"Mining opinions and beliefs from texts ::: Causal mapping methods and the climate change debate",
"Mining opinions and beliefs from texts ::: Automated causation tracking with the Penelope semantic frame extractor",
"Analyses and applications",
"Analyses and applications ::: Aggregation",
"Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape",
"Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: A macro-level overview: causes addressed in the climate change debate",
"Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: Micro-level investigations: opinions on nuclear power and global warming",
"From opinion observation to debate facilitation",
"From opinion observation to debate facilitation ::: Debate facilitation through models of alignment and polarization",
"Conclusion"
],
"paragraphs": [
[
"Over the past two decades, the rise of social media and the digitization of news and discussion platforms have radically transformed how individuals and groups create, process and share news and information. As Alan Rusbridger, former-editor-in-chief of the newspaper The Guardian has it, these technologically-driven shifts in the ways people communicate, organize themselves and express their beliefs and opinions, have",
"empower[ed] those that were never heard, creating a a new form of politics and turning traditional news corporations inside out. It is impossible to think of Donald Trump; of Brexit; of Bernie Sanders; of Podemos; of the growth of the far right in Europe; of the spasms of hope and violent despair in the Middle East and North Africa without thinking also of the total inversion of how news is created, shared and distributed. Much of it is liberating and and inspiring. Some of it is ugly and dark. And something - the centuries-old craft of journalism - is in danger of being lost BIBREF0.",
"Rusbridger's observation that the present media-ecology puts traditional notions of politics, journalism, trust and truth at stake is a widely shared one BIBREF1, BIBREF2, BIBREF3. As such, it has sparked interdisciplinary investigations, diagnoses and ideas for remedies across the economical, socio-political, and technological spectrum, challenging our existing assumptions and epistemologies BIBREF4, BIBREF5. Among these lines of inquiry, particular strands of research from the computational social sciences are addressing pressing questions of how emerging technologies and digital methods might be operationalized to regain a grip on the dynamics that govern the flow of on-line news and its associated multitudes of voices, opinions and conflicts. Could the information circulating on on-line (social) news platforms for instance be mined to better understand and analyze the problems facing our contemporary society? Might such data mining and analysis help us to monitor the growing number of social conflicts and crises due to cultural differences and diverging world-views? And finally, would such an approach potentially facilitate early detection of conflicts and even ways to resolve them before they turn violent?",
"Answering these questions requires further advances in the study of cultural conflict based on digital media data. This includes the development of fine-grained representations of cultural conflict based on theoretically-informed text analysis, the integration of game-theoretical approaches to models of polarization and alignment, as well as the construction of accessible tools and media-monitoring observatories: platforms that foster insight into the complexities of social behaviour and opinion dynamics through automated computational analyses of (social) media data. Through an interdisciplinary approach, the present article aims to make both a practical and theoretical contribution to these aspects of the study of opinion dynamics and conflict in new media environments."
],
[
"The objective of the present article is to critically examine possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics on the basis of an experimental data analytics pipeline or observatory for mining and analyzing climate change-related user comments from the news website of The Guardian (TheGuardian.com). Combining insights from the social and political sciences with computational methods for the linguistic analysis of texts, this observatory provides a series of spatial (network) representations of the opinion landscapes on climate change on the basis of causation frames expressed in news website comments. This allows for the exploration of opinion spaces at different levels of detail and aggregation.",
"Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this end, an approach based on the automated extraction of semantic frames expressing causation is proposed. The observatory thus builds on the theoretical premise that expressions of causation such as `global warming causes rises in sea levels' can be revelatory for a person or group's underlying belief systems. Through a further technical description of the observatory's data-analytical components, section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning. The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention.",
"Through the case examples from The Guardian's website and the theoretical discussions explored in these sections, the paper intends to make a twofold contribution to the fields of media studies, opinion dynamics and computational social science. Firstly, the paper introduces and chains together a number of data analytics components for social media monitoring (and facilitation) that were developed in the context of the <project name anonymized for review> infrastructure project. The <project name anonymized for review> infrastructure makes the components discussed in this paper available as open web services in order to foster reproducibility and further experimentation and development <infrastructure reference URL anonymized for review>. Secondly, and supplementing these technological and methodological gains, the paper addresses a number of theoretical, epistemological and ethical questions that are raised by experimental approaches to opinion exploration and facilitation. This notably includes methodological questions on the preservation of meaning through text and data mining, as well as the role of human interpretation, responsibility and incentivisation in observing and potentially facilitating opinion dynamics."
],
[
"In order to study on-line opinion dynamics and build the corresponding climate change opinion observatory discussed in this paper, a corpus of climate-change related news articles and news website comments was analyzed. Concretely, articles from the ‘climate change’ subsection from the news website of The Guardian dated from 2009 up to April 2019 were processed, along with up to 200 comments and associated metadata for articles where commenting was enabled at the time of publication. The choice for studying opinion dynamics using data from The Guardian is motivated by this news website's prominent position in the media landscape as well as its communicative setting, which is geared towards user engagement. Through this interaction with readers, the news platform embodies many of the recent shifts that characterize our present-day media ecology.",
"TheGuardian.com is generally acknowledged to be one of the UK's leading online newspapers, with 8,2 million unique visitors per month as of May 2013 BIBREF6. The website consists of a core news site, as well as a range of subsections that allow for further classification and navigation of articles. Articles related to climate change can for instance be accessed by navigating through the `News' section, over the subsection `environment', to the subsubsection `climate change' BIBREF7. All articles on the website can be read free of charge, as The Guardian relies on a business model that combines revenues from advertising, voluntary donations and paid subscriptions.",
"Apart from offering high-quality, independent journalism on a range of topics, a distinguishing characteristic of The Guardian is its penchant for reader involvement and engagement. Adopting to the changing media landscape and appropriating business models that fit the transition from print to on-line news media, the Guardian has transformed itself into a platform that enables forms of citizen journalism, blogging, and welcomes readers comments on news articles BIBREF0. In order for a reader to comment on articles, it is required that a user account is made, which provides a user with a unique user name and a user profile page with a stable URL. According to the website's help pages, providing users with an identity that is consistently recognized by the community fosters proper on-line community behaviour BIBREF8. Registered users can post comments on content that is open to commenting, and these comments are moderated by a dedicated moderation team according to The Guardian's community standards and participation guidelines BIBREF9. In support of digital methods and innovative approaches to journalism and data mining, The Guardian has launched an open API (application programming interface) through which developers can access different types of content BIBREF10. It should be noted that at the moment of writing this article, readers' comments are not accessible through this API. For the scientific and educational purposes of this paper, comments were thus consulted using a dedicated scraper.",
"Taking into account this community and technologically-driven orientation, the communicative setting of The Guardian from which opinions are to be mined and the underlying belief system revealed, is defined by articles, participating commenters and comment spheres (that is, the actual comments aggregated by user, individual article or collection of articles) (see Figure FIGREF4).",
"In this setting, articles (and previous comments on those articles) can be commented on by participating commenters, each of which bring to the debate his or her own opinions or belief system. What this belief system might consists of can be inferred on a number of levels, with varying degrees of precision. On the most general level, a generic description of the profile of the average reader of The Guardian can be informative. Such profiles have been compiled by market researchers with the purpose of informing advertisers about the demographic that might be reached through this news website (and other products carrying The Guardian's brand). As of the writing of this article, the audience The Guardian is presented to advertisers as a `progressive' audience:",
"Living in a world of unprecedented societal change, with the public narratives around politics, gender, body image, sexuality and diet all being challenged. The Guardian is committed to reflecting the progressive agenda, and reaching the crowd that uphold those values. It’s helpful that we reach over half of progressives in the UK BIBREF11.",
"A second, equally high-level indicator of the beliefs that might be present on the platform, are the links through which articles on climate change can be accessed. An article on climate change might for instance be consulted through the environment section of the news website, but also through the business section. Assuming that business interests might potentially be at odds with environmental concerns, it could be hypothesized that the particular comment sphere for that article consists of at least two potentially clashing frames of mind or belief systems.",
"However, as will be expanded upon further in this article, truly capturing opinion dynamics requires a more systemic and fine-grained approach. The present article therefore proposes a method for harvesting opinions from the actual comment texts. The presupposition is thereby that comment spheres are marked by a diversity of potentially related opinions and beliefs. Opinions might for instance be connected through the reply structure that marks the comment section of an article, but this connection might also manifest itself on a semantic level (that is, the level of meaning or the actual contents of the comments). To capture this multidimensional, interconnected nature of the comment spheres, it is proposed to represent comment spheres as networks, where the nodes represent opinions and beliefs, and edges the relationships between these beliefs (see the spatial representation of beliefs infra). The use of precision language tools to extract such beliefs and their mutual relationships, as will be explored in the following sections, can open up new pathways of model validation and creation."
],
[
"In traditional experimental settings, survey techniques and associated statistical models provide researchers with established methods to gauge and analyze the opinions of a population. When studying opinion landscapes through on-line social media, however, harvesting beliefs from big textual data such as news website comments and developing or appropriating models for their analysis is a non-trivial task BIBREF12, BIBREF13, BIBREF14.",
"In the present context, two challenges related to data-gathering and text mining need to be addressed: (1) defining what constitutes an expression of an opinion or belief, and (2) associating this definition with a pattern that might be extracted from texts. Recent scholarship in the fields of natural language processing (NLP) and argumentation mining has yielded a range of instruments and methods for the (automatic) identification of argumentative claims in texts BIBREF15, BIBREF16. Adding to these instruments and methods, the present article proposes an approach in which belief systems or opinions on climate change are accessed through expressions of causation."
],
[
"The climate change debate is often characterized by expressions of causation, that is, expressions linking a certain cause with a certain effect. Cultural or societal clashes on climate change might for instance concern diverging assessments of whether global warming is man-made or not BIBREF17. Based on such examples, it can be stated that expressions of causation are closely associated with opinions or beliefs, and that as such, these expressions can be considered a valuable indicator for the range and diversity of the opinions and beliefs that constitute the climate change debate. The observatory under discussion therefore focuses on the extraction and analysis of linguistic patterns called causation frames. As will be further demonstrated in this section, the benefit of this causation-based approach is that it offers a systemic approach to opinion dynamics that comprises different layers of meaning, notably the cognitive or social meaningfulness of patterns on account of their being expressions of causation, as well as further lexical and semantic information that might be used for analysis and comparison.",
"The study of expressions of causation as a method for accessing and assessing belief systems and opinions has been formalized and streamlined since the 1970s. Pioneered by political scientist Robert Axelrod and others, this causal mapping method (also referred to as `cognitive mapping') was introduced as a means of reconstructing and evaluating administrative and political decision-making processes, based on the principle that",
"the notion of causation is vital to the process of evaluating alternatives. Regardless of philosophical difficulties involved in the meaning of causation, people do evaluate complex policy alternatives in terms of the consequences a particular choice would cause, and ultimately of what the sum of these effects would be. Indeed, such causal analysis is built into our language, and it would be very difficult for us to think completely in other terms, even if we tried BIBREF18.",
"Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):",
"The basic elements of the proposed system are quite simple. The concepts a person uses are represented as points, and the causal links between these concepts are represented as arrows between these points. This gives a pictorial representation of the causal assertions of a person as a graph of points and arrows. This kind of representation of assertions as a graph will be called a cognitive map. The policy alternatives, all of the various causes and effects, the goals, and the ultimate utility of the decision maker can all be thought of as concept variables, and represented as points in the cognitive map. The real power of this approach appears when a cognitive map is pictured in graph form; it is then relatively easy to see how each of the concepts and causal relationships relate to each other, and to see the overall structure of the whole set of portrayed assertions BIBREF18.",
"In order to construct these cognitive maps based on textual information, Margaret Tucker Wrightson provides a set of reading and coding rules for extracting cause concepts, linkages (relations) and effect concepts from expressions in the English language. The assertion `Our present topic is the militarism of Germany, which is maintaining a state of tension in the Baltic Area' might for instance be encoded as follows: `the militarism of Germany' (cause concept), /+/ (a positive relationship), `maintaining a state of tension in the Baltic area' (effect concept) BIBREF19. Emphasizing the role of human interpretation, it is acknowledged that no strict set of rules can capture the entire spectrum of causal assertions:",
"The fact that the English language is as varied as those who use it makes the coder's task complex and difficult. No set of rules will completely solve the problems he or she might encounter. These rules, however, provide the coder with guidelines which, if conscientiously followed, will result in outcomes meeting social scientific standards of comparative validity and reliability BIBREF19.",
"To facilitate the task of encoders, the causal mapping method has gone through various iterations since its original inception, all the while preserving its original premises. Recent software packages have for instance been devised to support the data encoding and drawing process BIBREF20. As such, causal or cognitive mapping has become an established opinion and decision mining method within political science, business and management, and other domains. It has notably proven to be a valuable method for the study of recent societal and cultural conflicts. Thomas Homer-Dixon et al. for instance rely on cognitive-affective maps created from survey data to analyze interpretations of the housing crisis in Germany, Israeli attitudes toward the Western Wall, and moderate versus skeptical positions on climate change BIBREF21. Similarly, Duncan Shaw et al. venture to answer the question of `Why did Brexit happen?' by building causal maps of nine televised debates that were broadcast during the four weeks leading up to the Brexit referendum BIBREF22.",
"In order to appropriate the method of causal mapping to the study of on-line opinion dynamics, it needs to expanded from applications at the scale of human readers and relatively small corpora of archival documents and survey answers, to the realm of `big' textual data and larger quantities of information. This attuning of cognitive mapping methods to the large-scale processing of texts required for media monitoring necessarily involves a degree of automation, as will be explored in the next section."
],
[
"As outlined in the previous section, causal mapping is based on the extraction of so-called cause concepts, (causal) relations, and effect concepts from texts. The complexity of each of these these concepts can range from the relatively simple (as illustrated by the easily-identifiable cause and effect relation in the example of `German militarism' cited earlier), to more complex assertions such as `The development of international cooperation in all fields across the ideological frontiers will gradually remove the hostility and fear that poison international relations', which contains two effect concepts (viz. `the hostility that poisons international relations' and `the fear that poisons international relations'). As such, this statement would have to be encoded as a double relationship BIBREF19.",
"The coding guidelines in BIBREF19 further reflect that extracting cause and effect concepts from texts is an operation that works on both the syntactical and semantic levels of assertions. This can be illustrated by means of the guidelines for analyzing the aforementioned causal assertion on German militarism:",
"1. The first step is the realization of the relationship. Does a subject affect an object? 2. Having recognized that it does, the isolation of the cause and effects concepts is the second step. As the sentence structure indicates, \"the militarism of Germany\" is the causal concept, because it is the initiator of the action, while the direct object clause, \"a state of tension in the Baltic area,\" constitutes that which is somehow influenced, the effect concept BIBREF19.",
"In the field of computational linguistics, from which the present paper borrows part of its methods, this procedure for extracting information related to causal assertions from texts can be considered an instance of an operation called semantic frame extraction BIBREF23. A semantic frame captures a coherent part of the meaning of a sentence in a structured way. As documented in the FrameNet project BIBREF24, the Causation frame is defined as follows:",
"A Cause causes an Effect. Alternatively, an Actor, a participant of a (implicit) Cause, may stand in for the Cause. The entity Affected by the Causation may stand in for the overall Effect situation or event BIBREF25.",
"In a linguistic utterance such as a statement in a news website comment, the Causation frame can be evoked by a series of lexical units, such as `cause', `bring on', etc. In the example `If such a small earthquake CAUSES problems, just imagine a big one!', the Causation frame is triggered by the verb `causes', which therefore is called the frame evoking element. The Cause slot is filled by `a small earthquake', the Effect slot by `problems' BIBREF25.",
"In order to automatically mine cause and effects concepts from the corpus of comments on The Guardian, the present paper uses the Penelope semantic frame extractor: a tool that exploits the fact that semantic frames can be expressed as form-meaning mappings called constructions. Notably, frames were extracted from Guardian comments by focusing on the following lexical units (verbs, prepositions and conjunctions), listed in FrameNet as frame evoking elements of the Causation frame: Cause.v, Due to.prep, Because.c, Because of.prep, Give rise to.v, Lead to.v or Result in.v.",
"As illustrated by the following examples, the strings output by the semantic frame extractor adhere closely to the original utterance, preserving all of the the comments' causation frames real-world noisiness:",
"The output of the semantic frame extractor as such is used as the input for the ensuing pipeline components in the climate change opinion observatory. The aim of a further analysis of these frames is to find patterns in the beliefs and opinions they express. As will be discussed in the following section, which focuses on applications and cases, maintaining semantic nuances in this further analytic process foregrounds the role of models and aggregation levels."
],
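The Penelope semantic frame extractor itself exploits constructions (form-meaning mappings), as noted above. The following toy Python sketch only illustrates, in a much cruder way, what spotting the listed frame-evoking elements and splitting a comment into a Cause and an Effect slot amounts to; the cue list and the left/right heuristics are simplifying assumptions and do not reflect how Penelope actually operates.

import re

# Frame-evoking cues of the Causation frame (toy pattern matching, not Penelope)
CUES = [r"\bbecause of\b", r"\bbecause\b", r"\bdue to\b", r"\bgives? rise to\b",
        r"\bleads? to\b", r"\bresults? in\b", r"\bcauses?\b"]
CUE_RE = re.compile("|".join(CUES), flags=re.IGNORECASE)

def spot_causation(sentence):
    # Return (cause, cue, effect) for the first cue found, or None if no cue is present.
    # For 'because (of)' and 'due to' the cause follows the cue; otherwise it precedes it.
    match = CUE_RE.search(sentence)
    if match is None:
        return None
    cue = match.group(0).lower()
    left = sentence[:match.start()].strip(" ,.")
    right = sentence[match.end():].strip(" ,.")
    if cue.startswith(("because", "due to")):
        return right, cue, left
    return left, cue, right

print(spot_causation("If such a small earthquake causes problems, just imagine a big one!"))
# -> ('If such a small earthquake', 'causes', 'problems, just imagine a big one!')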
[
"Based on the presupposition that relations between causation frames reveal beliefs, the output of the semantic frame extractor creates various opportunities for exploring opinion landscapes and empirically validating conceptual models for opinion dynamics.",
"In general, any alignment of conceptual models and real-world data is an exercise in compromising, as the idealized, abstract nature of models is likely to be at odds with the messiness of the actual data. Finding such a compromise might for instance involve a reduction of the simplicity or elegance of the model, or, on the other hand, an increased aggregation (and thus reduced granularity) of the data.",
"Addressing this challenge, the current section reflects on questions of data modelling, aggregation and meaning by exploring, through case examples, different spatial representations of opinion landscapes mined from the TheGuardian.com's comment sphere. These spatial renditions will be understood as network visualizations in which nodes represent argumentative statements (beliefs) and edges the degree of similarity between these statements. On the most general level, then, such a representation can consists of an overview of all the causes expressed in the corpus of climate change-related Guardian comments. This type of visualization provides a birds-eye view of the entire opinion landscape as mined from the comment texts. In turn, such a general overview might elicit more fine-grained, micro-level investigations, in which a particular cause is singled out and its more specific associated effects are mapped. These macro and micro level overviews come with their own proper potential for theory building and evaluation, as well as distinct requirements for the depth or detail of meaning that needs to be represented. To get the most general sense of an opinion landscape one might for instance be more tolerant of abstract renditions of beliefs (e.g. by reducing statements to their most frequently used terms), but for more fine-grained analysis one requires more context and nuance (e.g. adhering as closely as possible to the original comment)."
],
[
"As follows from the above, one of the most fundamental questions when building automated tools to observe opinion dynamics that potentially aim at advising means of debate facilitation concerns the level of meaning aggregation. A clear argumentative or causal association between, for instance, climate change and catastrophic events such as floods or hurricanes may become detectable by automatic causal frame tracking at the scale of large collections of articles where this association might appear statistically more often, but detection comes with great challenges when the aim is to classify certain sets of only a few statements in more free expression environments such as comment spheres.",
"In other words, the problem of meaning aggregation is closely related to issues of scale and aggregation over utterances. The more fine-grained the semantic resolution is, that is, the more specific the cause or effect is that one is interested in, the less probable it is to observe the same statement twice. Moreover, with every independent variable (such as time, different commenters or user groups, etc.), less data on which fine-grained opinion statements are to be detected is available. In the present case of parsed comments from TheGuardian.com, providing insights into the belief system of individual commenters, even if all their statements are aggregated over time, relies on a relatively small set of argumentative statements. This relative sparseness is in part due to the fact that the scope of the semantic frame extractor is confined to the frame evoking elements listed earlier, thus omitting more implicit assertions of causation (i.e. expressions of causation that can only be derived from context and from reading between the lines).",
"Similarly, as will be explored in the ensuing paragraphs, matters of scale and aggregation determine the types of further linguistic analyses that can be performed on the output of the frame extractor. Within the field of computational linguistics, various techniques have been developed to represent the meaning of words as vectors that capture the contexts in which these words are typically used. Such analyses might reveal patterns of statistical significance, but it is also likely that in creating novel, numerical representations of the original utterances, the semantic structure of argumentatively linked beliefs is lost.",
"In sum, developing opinion observatories and (potential) debate facilitators entails finding a trade-off, or, in fact, a middle way between macro- and micro-level analyses. On the one hand, one needs to leverage automated analysis methods at the scale of larger collections to maximum advantage. But one also needs to integrate opportunities to interactively zoom into specific aspects of interest and provide more fine-grained information at these levels down to the actual statements. This interplay between macro- and micro-level analyses is explored in the case studies below."
],
[
"The main purpose of the observatory under discussion is to provide insight into the belief structures that characterize the opinion landscape on climate change. For reasons outlined above, this raises questions of how to represent opinions and, correspondingly, determining which representation is most suited as the atomic unit of comparison between opinions. In general terms, the desired outcome of further processing of the output of the semantic frame extractor is a network representation in which similar cause or effect strings are displayed in close proximity to one another. A high-level description of the pipeline under discussion thus goes as follows. In a first step, it can be decided whether one wants to map cause statements or effect statements. Next, the selected statements are grouped per commenter (i.e. a list is made of all cause statements or effect statements per commenter). These statements are filtered in order to retain only nouns, adjectives and verbs (thereby also omitting frequently occurring verbs such as ‘to be’). The remaining words are then lemmatized, that is, reduced to their dictionary forms. This output is finally translated into a network representation, whereby nodes represent (aggregated) statements, and edges express the semantic relatedness between statements (based on a set overlap whereby the number of shared lemmata are counted).",
"As illustrated by two spatial renditions that were created using this approach and visualized using the network analysis tool Gephi BIBREF26, the labels assigned to these nodes (lemmata, full statements, or other) can be appropriated to the scope of the analysis."
],
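A minimal sketch of this grouping, filtering, lemmatization and network-building pipeline is given below. It uses spaCy and networkx as convenient stand-ins (the paper does not state which libraries the actual pipeline relies on), and the two example statements are invented for illustration; running it requires the en_core_web_sm model to be installed.

import itertools
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "ADJ", "VERB"}

def content_lemmas(statement):
    # Keep only nouns, adjectives and verbs, drop stop words (e.g. 'to be'), and lemmatize.
    return {tok.lemma_.lower() for tok in nlp(statement)
            if tok.pos_ in CONTENT_POS and not tok.is_stop}

def belief_network(statements):
    # Nodes are (aggregated) statements; edge weights count the lemmata shared by two statements.
    graph = nx.Graph()
    lemma_sets = {s: content_lemmas(s) for s in statements}
    graph.add_nodes_from(lemma_sets)
    for a, b in itertools.combinations(lemma_sets, 2):
        shared = len(lemma_sets[a] & lemma_sets[b])
        if shared > 0:
            graph.add_edge(a, b, weight=shared)
    return graph

g = belief_network(["global warming causes rising sea levels",
                    "warming oceans cause stronger storms"])
nx.write_gexf(g, "beliefs.gexf")  # e.g. for visual inspection in Gephi

In the observatory itself, the statements passed to such a routine would first be grouped per commenter, and the node labels (lemmata, full statements, or other) chosen according to the scope of the analysis.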
[
"Suppose one wants to get a first idea about the scope and diversity of an opinion landscape, without any preconceived notions of this landscape's structure or composition. One way of doing this would be to map all of the causes that are mentioned in comments related to articles on climate change, that is, creating an overview of all the causes that have been retrieved by the frame extractor in a single representation. Such a representation would not immediately provide the granularity to state what the beliefs or opinions in the debates actually are, but rather, it might inspire a sense of what those opinions might be about, thus pointing towards potentially interesting phenomena that might warrant closer examination.",
"Figure FIGREF10, a high-level overview of the opinion landscape, reveals a number of areas to which opinions and beliefs might pertain. The top-left clusters in the diagram for instance reveal opinions about the role of people and countries, whereas on the right-hand side, we find a complementary cluster that might point to beliefs concerning the influence of high or increased CO2-emissions. In between, there is a cluster on power and energy sources, reflecting the energy debate's association to both issues of human responsibility and CO2 emissions. As such, the overview can already inspire, potentially at best, some very general hypotheses about the types of opinions that figure in the climate change debate."
],
[
"Based on the range of topics on which beliefs are expressed, a micro-level analysis can be conducted to reveal what those beliefs are and, for instance, whether they align or contradict each other. This can be achieved by singling out a cause of interest, and mapping out its associated effects.",
"As revealed by the global overview of the climate change opinion landscape, a portion of the debate concerns power and energy sources. One topic with a particularly interesting role in this debate is nuclear power. Figure FIGREF12 illustrates how a more detailed representation of opinions on this matter can be created by spatially representing all of the effects associated with causes containing the expression `nuclear power'. Again, similar beliefs (in terms of words used in the effects) are positioned closer to each other, thus facilitating the detection of clusters. Commenters on The Guardian for instance express concerns about the deaths or extinction that might be caused by this energy resource. They also voice opinions on its cleanliness, whether or not it might decrease pollution or be its own source of pollution, and how it reduces CO2-emissions in different countries.",
"Whereas the detailed opinion landscape on `nuclear power' is relatively limited in terms of the number of mined opinions, other topics might reveal more elaborate belief systems. This is for instance the case for the phenomenon of `global warming'. As shown in Figure FIGREF13, opinions on global warming are clustered around the idea of `increases', notably in terms of evaporation, drought, heat waves, intensity of cyclones and storms, etc. An adjacent cluster is related to `extremes', such as extreme summers and weather events, but also extreme colds."
],
[
"The observatory introduced in the preceding paragraphs provides preliminary insights into the range and scope of the beliefs that figure in climate change debates on TheGuardian.com. The observatory as such takes a distinctly descriptive stance, and aims to satisfy, at least in part, the information needs of researchers, activists, journalists and other stakeholders whose main concern is to document, investigate and understand on-line opinion dynamics. However, in the current information sphere, which is marked by polarization, misinformation and a close entanglement with real-world conflicts, taking a mere descriptive or neutral stance might not serve every stakeholder's needs. Indeed, given the often skewed relations between power and information, questions arise as to how media observations might in turn be translated into (political, social or economic) action. Knowledge about opinion dynamics might for instance inform interventions that remedy polarization or disarm conflict. In other words, the construction of (social) media observatories unavoidably lifts questions about the possibilities, limitations and, especially, implications of the machine-guided and human-incentivized facilitation of on-line discussions and debates.",
"Addressing these questions, the present paragraph introduces and explores the concept of a debate facilitator, that is, a device that extends the capabilities of the previously discussed observatory to also promote more interesting and constructive discussions. Concretely, we will conceptualize a device that reveals how the personal opinion landscapes of commenters relate to each other (in terms of overlap or lack thereof), and we will discuss what steps might potentially be taken on the basis of such representation to balance the debate. Geared towards possible interventions in the debate, such a device may thus go well beyond the observatory's objectives of making opinion processes and conflicts more transparent, which concomitantly raises a number of serious concerns that need to be acknowledged.",
"On rather fundamental ground, tools that steer debates in one way or another may easily become manipulative and dangerous instruments in the hands of certain interest groups. Various aspects of our daily lives are for instance already implicitly guided by recommender systems, the purpose and impact of which can be rather opaque. For this reason, research efforts across disciplines are directed at scrutinizing and rendering such systems more transparent BIBREF28. Such scrutiny is particularly pressing in the context of interventions on on-line communication platforms, which have already been argued to enforce affective communication styles that feed rather than resolve conflict. The objectives behind any facilitation device should therefore be made maximally transparent and potential biases should be fully acknowledged at every level, from data ingest to the dissemination of results BIBREF29. More concretely, the endeavour of constructing opinion observatories and facilitators foregrounds matters of `openness' of data and tools, security, ensuring data quality and representative sampling, accounting for evolving data legislation and policy, building communities and trust, and envisioning beneficial implications. By documenting the development process for a potential facilitation device, the present paper aims to contribute to these on-going investigations and debates. Furthermore, every effort has been made to protect the identities of the commenters involved. In the words of media and technology visionary Jaron Lanier, developers and computational social scientists entering this space should remain fundamentally aware of the fact that `digital information is really just people in disguise' BIBREF30.",
"With these reservations in mind, the proposed approach can be situated among ongoing efforts that lead from debate observation to facilitation. One such pathway, for instance, involves the construction of filters to detect hate speech, misinformation and other forms of expression that might render debates toxic BIBREF31, BIBREF32. Combined with community outreach, language-based filtering and detection tools have proven to raise awareness among social media users about the nature and potential implications of their on-line contributions BIBREF33. Similarly, advances can be expected from approaches that aim to extend the scope of analysis beyond descriptions of a present debate situation in order to model how a debate might evolve over time and how intentions of the participants could be included in such an analysis.",
"Progress in any of these areas hinges on a further integration of real-world data in the modelling process, as well as a further socio-technical and media-theoretical investigation of how activity on social media platforms and technologies correlate to real-world conflicts. The remainder of this section therefore ventures to explore how conceptual argument communication models for polarization and alignment BIBREF34 might be reconciled with real-world data, and how such models might inform debate facilitation efforts."
],
[
"As discussed in previous sections, news websites like TheGuardian.com establish a communicative settings in which agents (users, commenters) exchange arguments about different issues or topics. For those seeking to establish a healthy debate, it could thus be of interest to know how different users relate to each other in terms of their beliefs about a certain issue or topic (in this case climate change). Which beliefs are for instance shared by users and which ones are not? In other words, can we map patterns of alignment or polarization among users?",
"Figure FIGREF15 ventures to demonstrate how representations of opinion landscapes (generated using the methods outlined above) can be enriched with user information to answer such questions. Specifically, the graph represents the beliefs of two among the most active commenters in the corpus. The opinions of each user are marked using a colour coding scheme: red nodes represent the beliefs of the first user, blue nodes represent the beliefs of the second user. Nodes with a green colour represent beliefs that are shared by both users.",
"Taking into account again the factors of aggregation that were discussed in the previous section, Figure FIGREF15 supports some preliminary observations about the relationship between the two users in terms of their beliefs. Generally, given the fact that the graph concerns the two most active commenters on the website, it can be seen that the rendered opinion landscape is quite extensive. It is also clear that the belief systems of both users are not unrelated, as nodes of all colours can be found distributed throughout the graph. This is especially the case for the right-hand top cluster and right-hand bottom cluster of the graph, where green, red, and blue nodes are mixed. Since both users are discussing on articles on climate change, a degree of affinity between opinions or beliefs is to be expected.",
"Upon closer examination, a number of disparities between the belief systems of the two commenters can be detected. Considering the left-hand top cluster and center of the graph, it becomes clear that exclusively the red commenter is using a selection of terms related to the economical and socio-political realm (e.g. `people', `american', `nation', `government') and industry (e.g. `fuel', `industry', `car', etc.). The blue commenter, on the other hand, exclusively engages in using a range of terms that could be deemed more technical and scientific in nature (e.g. `feedback', `property', `output', `trend', `variability', etc.). From the graph, it also follows that the blue commenter does not enter into the red commenter's `social' segments of the graph as frequently as the red commenter enters the more scientifically-oriented clusters of the graph (although in the latter cases the red commenter does not use the specific technical terminology of the blue commenter). The cluster where both beliefs mingle the most (and where overlap can be observed), is the top right cluster. This overlap is constituted by very general terms (e.g. `climate', `change', and `science'). In sum, the graph reveals that the commenters' beliefs are positioned most closely to each other on the most general aspects of the debate, whereas there is less relatedness on the social and more technical aspects of the debate. In this regard, the depicted situation seemingly evokes currently on-going debates about the role or responsibilities of the people or individuals versus that of experts when it comes to climate change BIBREF35, BIBREF36, BIBREF37.",
"What forms of debate facilitation, then, could be based on these observations? And what kind of collective effects can be expected? As follows from the above, beliefs expressed by the two commenters shown here (which are selected based on their active participation rather than actual engagement or dialogue with one another) are to some extent complementary, as the blue commenter, who displays a scientifically-oriented system of beliefs, does not readily engage with the social topics discussed by the red commenter. As such, the overall opinion landscape of the climate change could potentially be enriched with novel perspectives if the blue commenter was invited to engage in a debate about such topics as industry and government. Similarly, one could explore the possibility of providing explanatory tools or additional references on occasions where the debate takes a more technical turn.",
"However, argument-based models of collective attitude formation BIBREF38, BIBREF34 also tell us to be cautious about such potential interventions. Following the theory underlying these models, different opinion groups prevailing during different periods of a debate will activate different argumentative associations. Facilitating exchange between users with complementary arguments supporting similar opinions may enforce biased argument pools BIBREF39 and lead to increasing polarization at the collective level. In the example considered here the two commenters agree on the general topic, but the analysis suggests that they might have different opinions about the adequate direction of specific climate change action. A more fine–grained automatic detection of cognitive and evaluative associations between arguments and opinions is needed for a reliable use of models to predict what would come out of facilitating exchange between two specific users. In this regard, computational approaches to the linguistic analysis of texts such as semantic frame extraction offer productive opportunities for empirically modelling opinion dynamics. Extraction of causation frames allows one to disentangle cause-effect relations between semantic units, which provides a productive step towards mapping and measuring structures of cognitive associations. These opportunities are to be explored by future work."
],
[
"Ongoing transitions from a print-based media ecology to on-line news and discussion platforms have put traditional forms of news production and consumption at stake. Many challenges related to how information is currently produced and consumed come to a head in news website comment sections, which harbour the potential of providing new insights into how cultural conflicts emerge and evolve. On the basis of an observatory for analyzing climate change-related comments from TheGuardian.com, this article has critically examined possibilities and limitations of the machine-assisted exploration and possible facilitation of on-line opinion dynamics and debates.",
"Beyond technical and modelling pathways, this examination brings into view broader methodological and epistemological aspects of the use of digital methods to capture and study the flow of on-line information and opinions. Notably, the proposed approaches lift questions of computational analysis and interpretation that can be tied to an overarching tension between `distant' and `close reading' BIBREF40. In other words, monitoring on-line opinion dynamics means embracing the challenges and associated trade-offs that come with investigating large quantities of information through computational, text-analytical means, but doing this in such a way that nuance and meaning are not lost in the process.",
"Establishing productive cross-overs between the level of opinions mined at scale (for instance through the lens of causation frames) and the detailed, closer looks at specific conversations, interactions and contexts depends on a series of preliminaries. One of these is the continued availability of high-quality, accessible data. As the current on-line media ecology is recovering from recent privacy-related scandals (e.g. Cambridge Analytica), such data for obvious reasons is not always easy to come by. In the same legal and ethical vein, reproducibility and transparency of models is crucial to the further development of analytical tools and methods. As the experiments discussed in this paper have revealed, a key factor in this undertaking are human faculties of interpretation. Just like the encoding schemes introduced by Axelrod and others before the wide-spread use of computational methods, present-day pipelines and tools foreground the role of human agents as the primary source of meaning attribution.",
"<This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732942 (Opinion Dynamics and Cultural Conflict in European Spaces – www.Odycceus.eu).>"
]
]
} | {
"question": [
"Does the paper report the results of previous models applied to the same tasks?",
"How is the quality of the discussion evaluated?",
"What is the technique used for text analysis and mining?",
"What are the causal mapping methods employed?"
],
"question_id": [
"5679fabeadf680e35a4f7b092d39e8638dca6b4d",
"a939a53cabb4893b2fd82996f3dbe8688fdb7bbb",
"8b99767620fd4efe51428b68841cc3ec06699280",
"312417675b3dc431eb7e7b16a917b7fed98d4376"
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"Climate Change",
"Climate Change",
"Climate Change",
"Climate Change"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this end, an approach based on the automated extraction of semantic frames expressing causation is proposed. The observatory thus builds on the theoretical premise that expressions of causation such as `global warming causes rises in sea levels' can be revelatory for a person or group's underlying belief systems. Through a further technical description of the observatory's data-analytical components, section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning. The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention."
],
"highlighted_evidence": [
"The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"073dd7c577c394d09c8662b55bea5245045b8fcf",
"6f23f1214b098b41370b2fb5264d90c461176996"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"0e9debd3ce939c32cc6cef2943e255fae4ef0068"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"bcd086f2b57e2f726160d06bdf092b4b95d38df0"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Axelrod's causal mapping method"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):"
],
"highlighted_evidence": [
"Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative)."
]
}
],
"annotation_id": [
"8dbf4bcb59862df22bdcfd37c237c9ffc86e829d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1. Communicative setting of many online newspaper sites. The newspaper publishes articles on different topics and users can comment on these articles and previous comments.",
"Figure 2. This is a global representation of the data produced by considering a 10 percent subsample of all the causes identified by the causation tracker on the set of comments. It treats statements as nodes of a network and two statements are linked if they share the same lemma (the number of shared lemmata corresponds to the link weight). In this analysis, only nouns, verbs and adjectives are considered (the text processing is done with spaCy (Honnibal and Montani 2019)). For this global view, each cause statement is labeled by that word within the statement that is most frequent in all the data. The visual output was created using the network exploration tool Gephi (0.92). The 2D layout is the result of the OpenOrd layout algorithm integrated in Gephi followed by the label adjustment tool to avoid too much overlap of labels.",
"Figure 3. A detailed representation of effect statements associated with nuclear power. Clusters concern potential extinction or deaths, notions of cleanliness and pollution, and the reduction of CO2 levels in different countries. Labels represent the full output of the semantic frame extractor.",
"Figure 4. A detailed representation of the effects of global warming. This graph conveys the diversity of opinions, as well as emerging patterns. It can for instance be observed that certain opinions are clustered around the idea of ‘increases’, notably in terms of evaporation, drought, heat waves, intensity of cyclones and storms, etc. An adjacent cluster is related to ‘extremes’, such as extreme summers and weather events, but also extreme colds. Labels represent the full output of the semantic frame extractor.",
"Figure 5. A representation of the opinion landscapes of two active commenters on TheGuardian.com. Statements by the first commenter are marked with a blue colour, opinions by the second commenter with a red colour. Overlapping statements are marked in green. The graph reveals that the commenters’ beliefs are positioned most closely to each other on the most general aspects of the debate, whereas there is less relatedness on the social and more technical aspects of the discussion."
],
"file": [
"3-Figure1-1.png",
"7-Figure2-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png",
"10-Figure5-1.png"
]
} |
1912.13109 | "Hinglish"Language -- Modeling a Messy Code-Mixed Language | With a sharp rise in fluency and users of "Hinglish" in linguistically diverse country, India, it has increasingly become important to analyze social content written in this language in platforms such as Twitter, Reddit, Facebook. This project focuses on using deep learning techniques to tackle a classification problem in categorizing social content written in Hindi-English into Abusive, Hate-Inducing and Not offensive categories. We utilize bi-directional sequence models with easy text augmentation techniques such as synonym replacement, random insertion, random swap, and random deletion to produce a state of the art classifier that outperforms the previous work done on analyzing this dataset. | {
"section_name": [
"Introduction",
"Introduction ::: Modeling challenges",
"Related Work ::: Transfer learning based approaches",
"Related Work ::: Hybrid models",
"Dataset and Features",
"Dataset and Features ::: Challenges",
"Model Architecture",
"Model Architecture ::: Loss function",
"Model Architecture ::: Models",
"Model Architecture ::: Hyper parameters",
"Results",
"Conclusion and Future work",
"References"
],
"paragraphs": [
[
"Hinglish is a linguistic blend of Hindi (very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixing with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media like Twitter, Facebook, Online blogs and reviews has also led to its usage in delivering hate and abuses in similar platforms. We aim to find such content in the social media focusing on the tweets. Hypothetically, if we can classify such tweets, we might be able to detect them and isolate them for further analysis before it reaches public. This will a great application of AI to the social cause and thus is motivating. An example of a simple, non offensive message written in Hinglish could be:",
"\"Why do you waste your time with <redacted content>. Aapna ghar sambhalta nahi(<redacted content>). Chale dusro ko basane..!!\"",
"The second part of the above sentence is written in Hindi while the first part is in English. Second part calls for an action to a person to bring order to his/her home before trying to settle others."
],
[
"From the modeling perspective there are couple of challenges introduced by the language and the labelled dataset. Generally, Hinglish follows largely fuzzy set of rules which evolves and is dependent upon the users preference. It doesn't have any formal definitions and thus the rules of usage are ambiguous. Thus, when used by different users the text produced may differ. Overall the challenges posed by this problem are:",
"Geographical variation: Depending upon the geography of origination, the content may be be highly influenced by the underlying region.",
"Language and phonetics variation: Based on a census in 2001, India has 122 major languages and 1599 other languages. The use of Hindi and English in a code switched setting is highly influenced by these language.",
"No grammar rules: Hinglish has no fixed set of grammar rules. The rules are inspired from both Hindi and English and when mixed with slur and slang produce large variation.",
"Spelling variation: There is no agreement on the spellings of the words which are mixed with English. For example to express love, a code mixed spelling, specially when used social platforms might be pyaar, pyar or pyr.",
"Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295. Prior work addresses this concern by using Transfer Learning on an architecture learnt on about 14,500 messages with an accuracy of 83.90. We addressed this concern using data augmentation techniques applied on text data."
],
[
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.",
"The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work."
],
[
"In another localized setting of Vietnamese language, Nguyen et al. in 2017 proposed a Hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese language using CNN to capture shorterm dependencies and LSTM to capture long term dependencies and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture."
],
[
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
[
"The obtained data set had many challenges and thus a data preparation task was employed to clean the data and make it ready for the deep learning pipeline. The challenges and processes that were applied are stated below:",
"Messy text messages: The tweets had urls, punctuations, username mentions, hastags, emoticons, numbers and lots of special characters. These were all cleaned up in a preprocessing cycle to clean the data.",
"Stop words: Stop words corpus obtained from NLTK was used to eliminate most unproductive words which provide little information about individual tweets.",
"Transliteration: Followed by above two processes, we translated Hinglish tweets into English words using a two phase process",
"Transliteration: In phase I, we used translation API's provided by Google translation services and exposed via a SDK, to transliteration the Hinglish messages to English messages.",
"Translation: After transliteration, words that were specific to Hinglish were translated to English using an Hinglish-English dictionary. By doing this we converted the Hinglish message to and assortment of isolated words being presented in the message in a sequence that can also be represented using word to vector representation.",
"Data augmentation: Given the data set was very small with a high degree of imbalance in the labelled messages for three different classes, we employed a data augmentation technique to boost the learning of the deep network. Following techniques from the paper by Jason et al. was utilized in this setting that really helped during the training phase.Thsi techniques wasnt used in previous studies. The techniques were:",
"Synonym Replacement (SR):Randomly choose n words from the sentence that are not stop words. Replace each of these words with one of its synonyms chosen at random.",
"Random Insertion (RI):Find a random synonym of a random word in the sentence that is not a stop word. Insert that synonym into a random position in the sentence. Do this n times.",
"Random Swap (RS):Randomly choose two words in the sentence and swap their positions. Do this n times.",
"Random Deletion (RD):For each word in the sentence, randomly remove it with probability p.",
"Word Representation: We used word embedding representations by Glove for creating word embedding layers and to obtain the word sequence vector representations of the processed tweets. The pre-trained embedding dimension were one of the hyperparamaters for model. Further more, we introduced another bit flag hyperparameter that determined if to freeze these learnt embedding.",
"Train-test split: The labelled dataset that was available for this task was very limited in number of examples and thus as noted above few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was done from the original, cleansed data set. Thus, 700 tweets/messages were held out for testing. All model evaluation were done in on the test set that got generated by this process. The results presented in this report are based on the performance of the model on the test set. The training set of 2489 messages were however sent to an offline pipeline for augmenting the data. The resulting training dataset was thus 7934 messages. the final distribution of messages for training and test was thus below:"
],
[
"We tested the performance of various model architectures by running our experiment over 100 times on a CPU based compute which later as migrated to GPU based compute to overcome the slow learning progress. Our universal metric for minimizing was the validation loss and we employed various operational techniques for optimizing on the learning process. These processes and its implementation details will be discussed later but they were learning rate decay, early stopping, model checkpointing and reducing learning rate on plateau."
],
[
"For the loss function we chose categorical cross entropy loss in finding the most optimal weights/parameters of the model. Formally this loss function for the model is defined as below:",
"The double sum is over the number of observations and the categories respectively. While the model probability is the probability that the observation i belongs to category c."
],
[
"Among the model architectures we experimented with and without data augmentation were:",
"Fully Connected dense networks: Model hyperparameters were inspired from the previous work done by Vo et al and Mathur et al. This was also used as a baseline model but we did not get appreciable performance on such architecture due to FC networks not being able to capture local and long term dependencies.",
"Convolution based architectures: Architecture and hyperparameter choices were chosen from the past study Deon on the subject. We were able to boost the performance as compared to only FC based network but we noticed better performance from architectures that are suitable to sequences such as text messages or any timeseries data.",
"Sequence models: We used SimpleRNN, LSTM, GRU, Bidirectional LSTM model architecture to capture long term dependencies of the messages in determining the class the message or the tweet belonged to.",
"Based on all the experiments we conducted below model had best performance related to metrics - Recall rate, F1 score and Overall accuracy."
],
[
"Choice of model parameters were in the above models were inspired from previous work done but then were tuned to the best performance of the Test dataset. Following parameters were considered for tuning.",
"Learning rate: Based on grid search the best performance was achieved when learning rate was set to 0.01. This value was arrived by a grid search on lr parameter.",
"Number of Bidirectional LSTM units: A set of 32, 64, 128 hidden activation units were considered for tuning the model. 128 was a choice made by Vo et al in modeling for Vietnamese language but with our experiments and with a small dataset to avoid overfitting to train dataset, a smaller unit sizes were considered.",
"Embedding dimension: 50, 100 and 200 dimension word representation from Glove word embedding were considered and the best results were obtained with 100d representation, consistent with choices made in the previous work.",
"Transfer learning on Embedding; Another bit flag for training the embedding on the train data or freezing the embedding from Glove was used. It was determined that set of pre-trained weights from Glove was best when it was fine tuned with Hinglish data. It provides evidence that a separate word or sentence level embedding when learnt for Hinglish text analysis will be very useful.",
"Number of dense FC layers.",
"Maximum length of the sequence to be considered: The max length of tweets/message in the dataset was 1265 while average was 116. We determined that choosing 200 resulted in the best performance."
],
[
"During our experimentation, it was evident that this is a hard problem especially detecting the hate speech, text in a code- mixed language. The best recall rate of 77 % for hate speech was obtained by a Bidirectional LSTM with 32 units with a recurrent drop out rate of 0.2. Precision wise GRU type of RNN sequence model faired better than other kinds for hate speech detection. On the other hand for detecting offensive and non offensive tweets, fairly satisfactory results were obtained. For offensive tweets, 92 % precision was and recall rate of 88% was obtained with GRU versus BiLSTM based models. Comparatively, Recall of 85 % and precision of 76 % was obtained by again GRU and BiLSTM based models as shown and marked in the results."
],
[
"The results of the experiments are encouraging on detective offensive vs non offensive tweets and messages written in Hinglish in social media. The utilization of data augmentation technique in this classification task was one of the vital contributions which led us to surpass results obtained by previous state of the art Hybrid CNN-LSTM based models. However, the results of the model for predicting hateful tweets on the contrary brings forth some shortcomings of the model. The biggest shortcoming on the model based on error analysis indicates less than generalized examples presented by the dataset. We also note that the embedding learnt from the Hinglish data set may be lacking and require extensive training to have competent word representations of Hinglish text. Given this learning's, we identify that creating word embeddings on much larger Hinglish corpora may have significant results. We also hypothesize that considering alternate methods than translation and transliteration may prove beneficial."
],
[
"[1] Mathur, Puneet and Sawhney, Ramit and Ayyar, Meghna and Shah, Rajiv, Did you offend me? classification of offensive tweets in hinglish language, Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"[2] Mathur, Puneet and Shah, Rajiv and Sawhney, Ramit and Mahata, Debanjan Detecting offensive tweets in hindi-english code-switched language Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media",
"[3] Vo, Quan-Hoang and Nguyen, Huy-Tien and Le, Bac and Nguyen, Minh-Le Multi-channel LSTM-CNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering (KSE)",
"[4] Hochreiter, Sepp and Schmidhuber, Jürgen Long short-term memory Neural computation 1997",
"[5] Sinha, R Mahesh K and Thakur, Anil Multi-channel LSTM-CNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering (KSE)",
"[6] Pennington, Jeffrey and Socher, Richard and Manning, Christopher Glove: Global vectors for word representation Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"[7] Zhang, Lei and Wang, Shuai and Liu, Bing Deep learning for sentiment analysis: A survey Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery",
"[8] Caruana, Rich and Lawrence, Steve and Giles, C Lee Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping Advances in neural information processing systems",
"[9] Beale, Mark Hudson and Hagan, Martin T and Demuth, Howard B Neural network toolbox user’s guide The MathWorks Incs",
"[10] Chollet, François and others Keras: The python deep learning library Astrophysics Source Code Library",
"[11] Wei, Jason and Zou, Kai EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)"
]
]
} | {
"question": [
"What is the previous work's model?",
"What dataset is used?",
"How big is the dataset?",
"How is the dataset collected?",
"Was each text augmentation technique experimented individually?",
"What models do previous work use?",
"Does the dataset contain content from various social media platforms?",
"What dataset is used?"
],
"question_id": [
"792d7b579cbf7bfad8fe125b0d66c2059a174cf9",
"44a2a8e187f8adbd7d63a51cd2f9d2d324d0c98d",
"5908d7fb6c48f975c5dfc5b19bb0765581df2b25",
"cca3301f20db16f82b5d65a102436bebc88a2026",
"cfd67b9eeb10e5ad028097d192475d21d0b6845b",
"e1c681280b5667671c7f78b1579d0069cba72b0e",
"58d50567df71fa6c3792a0964160af390556757d",
"07c79edd4c29635dbc1c2c32b8df68193b7701c6"
],
"nlp_background": [
"two",
"two",
"two",
"two",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Ternary Trans-CNN"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.",
"The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work."
],
"highlighted_evidence": [
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.\n\nThe approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work."
]
}
],
"annotation_id": [
"7011aa54bc26a8fc6341a2dcdb252137b10afb54"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"HEOT ",
"A labelled dataset for a corresponding english tweets"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
"highlighted_evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al."
]
},
{
"unanswerable": false,
"extractive_spans": [
"HEOT"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
"highlighted_evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small."
]
}
],
"annotation_id": [
"115ade40d6ac911be5ffa8d7d732c22c6e822f35",
"bf1ad37030290082d5397af72edc7a56f648141e"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"3189 rows of text messages"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295. Prior work addresses this concern by using Transfer Learning on an architecture learnt on about 14,500 messages with an accuracy of 83.90. We addressed this concern using data augmentation techniques applied on text data."
],
"highlighted_evidence": [
"Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Resulting dataset was 7934 messages for train and 700 messages for test.",
"evidence": [
"Train-test split: The labelled dataset that was available for this task was very limited in number of examples and thus as noted above few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was done from the original, cleansed data set. Thus, 700 tweets/messages were held out for testing. All model evaluation were done in on the test set that got generated by this process. The results presented in this report are based on the performance of the model on the test set. The training set of 2489 messages were however sent to an offline pipeline for augmenting the data. The resulting training dataset was thus 7934 messages. the final distribution of messages for training and test was thus below:",
"FLOAT SELECTED: Table 3: Train-test split"
],
"highlighted_evidence": [
"The resulting training dataset was thus 7934 messages. the final distribution of messages for training and test was thus below:",
"FLOAT SELECTED: Table 3: Train-test split"
]
}
],
"annotation_id": [
"bf818377320e6257fc663920044efc482d2d8fb3",
"e510717bb49fa0f0e8faff405e1ce3bbbde46c6a"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al",
"HEOT obtained from one of the past studies done by Mathur et al"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
"highlighted_evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al."
]
}
],
"annotation_id": [
"fdb7b5252df7cb221bb9b696fddcc5e070453392"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"2f35875b3d410f546700ef96c2c2926092dbb5b0"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Ternary Trans-CNN ",
"Hybrid multi-channel CNN and LSTM"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Related Work ::: Transfer learning based approaches",
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.",
"The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work.",
"Related Work ::: Hybrid models",
"In another localized setting of Vietnamese language, Nguyen et al. in 2017 proposed a Hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese language using CNN to capture shorterm dependencies and LSTM to capture long term dependencies and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture."
],
"highlighted_evidence": [
" Transfer learning based approaches\nMathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.\n\nThe approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work.\n\nRelated Work ::: Hybrid models\nIn another localized setting of Vietnamese language, Nguyen et al. in 2017 proposed a Hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese language using CNN to capture shorterm dependencies and LSTM to capture long term dependencies and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture."
]
}
],
"annotation_id": [
"a4a998955b75604e43627ad8b411e15dfa039b88"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"Hinglish is a linguistic blend of Hindi (very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixing with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media like Twitter, Facebook, Online blogs and reviews has also led to its usage in delivering hate and abuses in similar platforms. We aim to find such content in the social media focusing on the tweets. Hypothetically, if we can classify such tweets, we might be able to detect them and isolate them for further analysis before it reaches public. This will a great application of AI to the social cause and thus is motivating. An example of a simple, non offensive message written in Hinglish could be:"
],
"highlighted_evidence": [
"We aim to find such content in the social media focusing on the tweets."
]
}
],
"annotation_id": [
"ec3208718af7624c0ffd8c9ec5d9f4d04217b9ab"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"HEOT ",
"A labelled dataset for a corresponding english tweets "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:"
],
"highlighted_evidence": [
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al."
]
}
],
"annotation_id": [
"bd8f9113da801bf11685ae686a6e0ca758f17b83"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Table 1: Annotated Data set",
"Table 2: Examples in the dataset",
"Table 3: Train-test split",
"Figure 1: Deep learning network used for the modeling",
"Figure 2: Results of various experiments"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"5-Figure1-1.png",
"5-Figure2-1.png"
]
} |
1911.03310 | How Language-Neutral is Multilingual BERT? | Multilingual BERT (mBERT) provides sentence representations for 104 languages, which are useful for many multi-lingual tasks. Previous work probed the cross-linguality of mBERT using zero-shot transfer learning on morphological and syntactic tasks. We instead focus on the semantic properties of mBERT. We show that mBERT representations can be split into a language-specific component and a language-neutral component, and that the language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment and sentence retrieval but is not yet good enough for the more difficult task of MT quality estimation. Our work presents interesting challenges which must be solved to build better language-neutral representations, particularly for tasks requiring linguistic transfer of semantics. | {
"section_name": [
"Introduction",
"Related Work",
"Centering mBERT Representations",
"Probing Tasks",
"Probing Tasks ::: Language Identification.",
"Probing Tasks ::: Language Similarity.",
"Probing Tasks ::: Parallel Sentence Retrieval.",
"Probing Tasks ::: Word Alignment.",
"Probing Tasks ::: MT Quality Estimation.",
"Experimental Setup",
"Results ::: Language Identification.",
"Results ::: Language Similarity.",
"Results ::: Parallel Sentence Retrieval.",
"Results ::: Word Alignment.",
"Results ::: MT Quality Estimation.",
"Fine-tuning mBERT",
"Fine-tuning mBERT ::: UDify",
"Fine-tuning mBERT ::: lng-free",
"Conclusions"
],
"paragraphs": [
[
"Multilingual BERT (mBERT; BIBREF0) is gaining popularity as a contextual representation for various multilingual tasks, such as dependency parsing BIBREF1, BIBREF2, cross-lingual natural language inference (XNLI) or named-entity recognition (NER) BIBREF3, BIBREF4, BIBREF5.",
"BIBREF3 present an exploratory paper showing that mBERT can be used cross-lingually for zero-shot transfer in morphological and syntactic tasks, at least for typologically similar languages. They also study an interesting semantic task, sentence-retrieval, with promising initial results. Their work leaves many open questions in terms of how good the cross-lingual mBERT representation is for semantics, motivating our work.",
"In this paper, we directly assess the semantic cross-lingual properties of mBERT. To avoid methodological issues with zero-shot transfer (possible language overfitting, hyper-parameter tuning), we selected tasks that only involve a direct comparison of the representations: cross-lingual sentence retrieval, word alignment, and machine translation quality estimation (MT QE). Additionally, we explore how the language is represented in the embeddings by training language identification classifiers and assessing how the representation similarity corresponds to phylogenetic language families.",
"Our results show that the mBERT representations, even after language-agnostic fine-tuning, are not very language-neutral. However, the identity of the language can be approximated as a constant shift in the representation space. An even higher language-neutrality can still be achieved by a linear projection fitted on a small amount of parallel data.",
"Finally, we present attempts to strengthen the language-neutral component via fine-tuning: first, for multi-lingual syntactic and morphological analysis; second, towards language identity removal via a adversarial classifier."
],
[
"Since the publication of mBERT BIBREF0, many positive experimental results were published.",
"BIBREF2 reached impressive results in zero-shot dependency parsing. However, the representation used for the parser was a bilingual projection of the contextual embeddings based on word-alignment trained on parallel data.",
"BIBREF3 recently examined the cross-lingual properties of mBERT on zero-shot NER and part-of-speech (POS) tagging but the success of zero-shot transfer strongly depends on how typologically similar the languages are. Similarly, BIBREF4 trained good multilingual models for POS tagging, NER, and XNLI, but struggled to achieve good results in the zero-shot setup.",
"BIBREF3 assessed mBERT on cross-lingual sentence retrieval between three language pairs. They observed that if they subtract the average difference between the embeddings from the target language representation, the retrieval accuracy significantly increases. We systematically study this idea in the later sections.",
"Many experiments show BIBREF4, BIBREF5, BIBREF1 that downstream task models can extract relevant features from the multilingual representations. But these results do not directly show language-neutrality, i.e., to what extent are similar phenomena are represented similarly across languages. The models can obtain the task-specific information based on the knowledge of the language, which (as we show later) can be easily identified. Our choice of evaluation tasks eliminates this risk by directly comparing the representations. Limited success in zero-shot setups and the need for explicit bilingual projection in order to work well BIBREF3, BIBREF4, BIBREF6 also shows limited language neutrality of mBERT."
],
[
"Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.",
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings.",
"We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks. For all tasks, we test all layers of the model. For tasks utilizing a single-vector sentence representation, we test both the vector corresponding to the [cls] token and mean-pooled states."
],
[
"We employ five probing tasks to evaluate the language neutrality of the representations."
],
[
"With a representation that captures all phenomena in a language-neutral way, it should be difficult to determine what language the sentence is written in. Unlike other tasks, language identification does require fitting a classifier. We train a linear classifier on top of a sentence representation to try to classify the language of the sentence."
],
[
"Experiments with POS tagging BIBREF3 suggest that similar languages tend to get similar representations on average. We quantify that observation by measuring how languages tend to cluster by the language families using V-measure over hierarchical clustering of the language centeroid BIBREF7."
],
[
"For each sentence in a multi-parallel corpus, we compute the cosine distance of its representation with representations of all sentences on the parallel side of the corpus and select the sentence with the smallest distance.",
"Besides the plain and centered [cls] and mean-pooled representations, we evaluate explicit projection into the “English space”. For each language, we fit a linear regression projecting the representations into English representation space using a small set of parallel sentences."
],
[
"While sentence retrieval could be done with keyword spotting, computing bilingual alignment requires resolving detailed correspondence on the word level.",
"We find the word alignment as a minimum weighted edge cover of a bipartite graph. The graph connects the tokens of the sentences in the two languages and edges between them are weighted with the cosine distance of the token representation. Tokens that get split into multiple subwords are represented using the average of the embeddings of the subwords. Note that this algorithm is invariant to representation centering which would only change the edge weights by a constant offset.",
"We evaluate the alignment using the F$_1$ score over both sure and possible alignment links in a manually aligned gold standard."
],
[
"MT QE assesses the quality of an MT system output without having access to a reference translation.",
"The standard evaluation metric is the correlation with the Human-targeted Translation Error Rate which is the number of edit operations a human translator would need to do to correct the system output. This is a more challenging task than the two previous ones because it requires capturing more fine-grained differences in meaning.",
"We evaluate how cosine distance of the representation of the source sentence and of the MT output reflects the translation quality. In addition to plain and centered representations, we also test trained bilingual projection, and a fully supervised regression trained on training data."
],
[
"We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.",
"To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.",
"For parallel sentence retrieval, we use a multi-parallel corpus of test data from the WMT14 evaluation campaign BIBREF8 with 3,000 sentences in Czech, English, French, German, Hindi, and Russian. The linear projection experiment uses the WMT14 development data.",
"We use manually annotated word alignment datasets to evaluate word alignment between English on one side and Czech (2.5k sent.; BIBREF9), Swedish (192 sent.; BIBREF10), German (508 sent.), French (447 sent.; BIBREF11) and Romanian (248 sent.; BIBREF12) on the other side. We compare the results with FastAlign BIBREF13 that was provided with 1M additional parallel sentences from ParaCrawl BIBREF14 in addition to the test data.",
"For MT QE, we use English-German data provided for the WMT19 QE Shared Task BIBREF15 consisting training and test data with source senteces, their automatic translations, and manually corrections."
],
[
"Table TABREF7 shows that centering the sentence representations considerably decreases the accuracy of language identification, especially in the case of mean-pooled embeddings. This indicates that the proposed centering procedure does indeed remove the language-specific information to a great extent."
],
[
"Figure FIGREF9 is a tSNE plot BIBREF16 of the language centroids, showing that the similarity of the centroids tends to correspond to the similarity of the languages. Table TABREF10 confirms that the hierarchical clustering of the language centroids mostly corresponds to the language families."
],
[
"Results in Table TABREF12 reveal that the representation centering dramatically improves the retrieval accuracy, showing that it makes the representations more language-neutral. However, an explicitly learned projection of the representations leads to a much greater improvement, reaching a close-to-perfect accuracy, even though the projection was fitted on relatively small parallel data. The accuracy is higher for mean-pooled states than for the [cls] embedding and varies according to the layer of mBERT used (see Figure FIGREF13)."
],
[
"Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance."
],
[
"Qualitative results of MT QE are tabulated in Table TABREF18. Unlike sentence retrieval, QE is more sensitive to subtle differences between sentences. Measuring the distance of the non-centered sentence vectors does not correlate with translation quality at all. Centering or explicit projection only leads to a mild correlation, much lower than a supervisedly trained regression;and even better performance is possible BIBREF15. The results show that the linear projection between the representations only captures a rough semantic correspondence, which does not seem to be sufficient for QE, where the most indicative feature appears to be sentence complexity."
],
[
"We also considered model fine-tuning towards stronger language neutrality. We evaluate two fine-tuned versions of mBERT: UDify, tuned for a multi-lingual dependency parser, and lng-free, tuned to jettison the language-specific information from the representations."
],
[
"The UDify model BIBREF1 uses mBERT to train a single model for dependency parsing and morphological analysis of 75 languages. During the parser training, mBERT is fine-tuned, which improves the parser accuracy. Results on zero-shot parsing suggest that the fine-tuning leads to more cross-lingual representations with respect to morphology and syntax.",
"However, our analyses show that fine-tuning mBERT for multilingual dependency parsing does not remove the language identity information from the representations and actually makes the representations less semantically cross-lingual."
],
[
"In this experiment, we try to make the representations more language-neutral by removing the language identity from the model using an adversarial approach. We continue training mBERT in a multi-task learning setup with the masked LM objective with the same sampling procedure BIBREF0 jointly with adversarial language ID classifiers BIBREF17. For each layer, we train one classifier for the [cls] token and one for the mean-pooled hidden states with the gradient reversal layer BIBREF18 between mBERT and the classifier.",
"The results reveal that the adversarial removal of language information succeeds in dramatically decreasing the accuracy of the language identification classifier; the effect is strongest in deeper layers for which the standard mBERT tend to perform better (see Figure FIGREF22). However, other tasksare not affected by the adversarial fine-tuning."
],
[
"Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks.",
"Contextual embeddings of mBERT capture similarities between languages and cluster the languages by their families. Neither cross-lingual fine-tuning nor adversarial language identity removal breaks this property. A part of language information is encoded by the position in the embedding space, thus a certain degree of cross-linguality can be achieved by centering the representations for each language. Exploiting this property allows a good cross-lingual sentence retrieval performance and bilingual word alignment (which is invariant to the shift). A good cross-lingual representation can be achieved by fitting a supervised projection on a small parallel corpus."
]
]
} | {
"question": [
"How they demonstrate that language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment?",
"Are language-specific and language-neutral components disjunctive?",
"How they show that mBERT representations can be split into a language-specific component and a language-neutral component?",
"What challenges this work presents that must be solved to build better language-neutral representations?"
],
"question_id": [
"66125cfdf11d3bf8e59728428e02021177142c3a",
"222b2469eede9a0448e0226c6c742e8c91522af3",
"6f8386ad64dce3a20bc75165c5c7591df8f419cf",
"81dc39ee6cdacf90d5f0f62134bf390a29146c65"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.",
"We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.",
"To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.",
"Results ::: Word Alignment.",
"Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.",
"FLOAT SELECTED: Table 4: Maximum F1 score for word alignment across layers compared with FastAlign baseline."
],
"highlighted_evidence": [
"Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.",
"We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.\n\nTo train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.",
"Results ::: Word Alignment.\nTable TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.",
"FLOAT SELECTED: Table 4: Maximum F1 score for word alignment across layers compared with FastAlign baseline."
]
},
{
"unanswerable": false,
"extractive_spans": [
"explicit projection had a negligible effect on the performance"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance."
],
"highlighted_evidence": [
"Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance."
]
}
],
"annotation_id": [
"1187ecb05a17aa786395c74af2f50bd6f0eb126e",
"2c3458b0bced1480d8a940f3b5f2908a38df0ce0"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings."
],
"highlighted_evidence": [
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space."
]
}
],
"annotation_id": [
"61674e606b053abc2faf06cb19a69c0cbcbb2a38"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.",
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings.",
"We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks. For all tasks, we test all layers of the model. For tasks utilizing a single-vector sentence representation, we test both the vector corresponding to the [cls] token and mean-pooled states."
],
"highlighted_evidence": [
"Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.\n\nWe thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space.",
"We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks."
]
}
],
"annotation_id": [
"17c088f205d363ef5f3af7c50126078b13e0a3a0"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks."
],
"highlighted_evidence": [
"Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks."
]
}
],
"annotation_id": [
"b2f216dee55d7e5f0dd7470664558c84f0ae6fcc"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Accuracy of language identification, values from the best-scoring layers.",
"Figure 1: Language centroids of the mean-pooled representations from the 8th layer of cased mBERT on a tSNE plot with highlighted language families.",
"Table 2: V-Measure for hierarchical clustering of language centroids and grouping languages into genealogical families for families with at least three languages covered by mBERT.",
"Figure 2: Accuracy of sentence retrieval for meanpooled contextual embeddings from BERT layers.",
"Table 3: Average accuracy for sentence retrieval over all 30 language pairs.",
"Table 4: Maximum F1 score for word alignment across layers compared with FastAlign baseline.",
"Table 5: Correlation of estimated T quality with HTER for English-to-German translation on T19 data.",
"Figure 3: Language ID accuracy for different layers of mBERT."
],
"file": [
"3-Table1-1.png",
"3-Figure1-1.png",
"4-Table2-1.png",
"4-Figure2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"5-Table5-1.png",
"5-Figure3-1.png"
]
} |
1907.12108 | CAiRE: An End-to-End Empathetic Chatbot | In this paper, we present an end-to-end empathetic conversation agent CAiRE. Our system adapts TransferTransfo (Wolf et al., 2019) learning approach that fine-tunes a large-scale pre-trained language model with multi-task objectives: response language modeling, response prediction and dialogue emotion detection. We evaluate our model on the recently proposed empathetic-dialogues dataset (Rashkin et al., 2019), the experiment results show that CAiRE achieves state-of-the-art performance on dialogue emotion detection and empathetic response generation. | {
"section_name": [
"Introduction",
"User Interface",
"Scalable to Multiple Users",
"Generative Conversational Model",
"Active Learning of Ethical Values and Persona",
"Conclusion"
],
"paragraphs": [
[
"Empathetic chatbots are conversational agents that can understand user emotions and respond appropriately. Incorporating empathy into the dialogue system is essential to achieve better human-robot interaction because naturally, humans express and perceive emotion in natural language to increase their sense of social bonding. In the early development stage of such conversational systems, most of the efforts were put into developing hand-crafted rules of engagement. Recently, a modularized empathetic dialogue system, XiaoIce BIBREF0 achieved an impressive number of conversational turns per session, which was even higher than average conversations between humans. Despite the promising results of XiaoIce, this system is designed using a complex architecture with hundreds of independent components, such as Natural Language Understanding and Response Generation modules, using a tremendous amount of labeled data for training each of them.",
"In contrast to such modularized dialogue system, end-to-end systems learn all components as a single model in a fully data-driven manner, and mitigate the lack of labeled data by sharing representations among different modules. In this paper, we build an end-to-end empathetic chatbot by fine-tuning BIBREF1 the Generative Pre-trained Transformer (GPT) BIBREF2 on the PersonaChat dataset BIBREF3 and the Empathetic-Dialogue dataset BIBREF4 . We establish a web-based user interface which allows multiple users to asynchronously chat with CAiRE online. CAiRE can also collect user feedback and continuously improve its response quality and discard undesirable generation behaviors (e.g. unethical responses) via active learning and negative training."
],
[
"As shown in Figure FIGREF4 , our user interface is based solely on text inputs. Users can type anything in the input box and get a response immediately from the server. A report button is added at the bottom to allow users to report unethical dialogues, which will then be marked and saved in our back-end server separately. To facilitate the need for teaching our chatbot how to respond properly, we add an edit button next to the response. When the user clicks it, a new input box will appear, and the user can type in the appropriate response they think the chatbot should have replied with."
],
[
"Due to the high demand for GPU computations during response generation, the computation cost needs to be well distributed across different GPUs to support multiple users. We adopt several approaches to maximize the utility of GPUs without crashing the system. Firstly, we set up two independent processes in each GTX 1080Ti, where we found the highest GPU utilities to be around 90%, with both processes working stably. Secondly, we employ a load-balancing module to distribute the requests to idle processes based on their working loads. During a stress testing, we simulated users sending requests every 2 seconds, and using 8 GPUs, we were able to support more than 50 concurrent requests."
],
[
"We apply the Generative Pre-trained Transformer (GPT) BIBREF2 as our pre-trained language model. GPT is a multi-layer Transformer decoder with a causal self-attention which is pre-trained, unsupervised, on the BooksCorpus dataset. BooksCorpus dataset contains over 7,000 unique unpublished books from a variety of genres. Pre-training on such large contiguous text corpus enables the model to capture long-range dialogue context information. Furthermore, as existing EmpatheticDialogue dataset BIBREF4 is relatively small, fine-tuning only on such dataset will limit the chitchat topic of the model. Hence, we first integrate persona into CAiRE, and pre-train the model on PersonaChat BIBREF3 , following a previous transfer-learning strategy BIBREF1 . This pre-training procedure allows CAiRE to have a more consistent persona, thus improving the engagement and consistency of the model. We refer interested readers to the code repository recently released by HuggingFace. Finally, in order to optimize empathy in CAiRE, we fine-tune this pre-trained model using EmpatheticDialogue dataset to help CAiRE understand users' feeling."
],
[
"CAiRE was first presented in ACL 2019 keynote talk “Loquentes Machinea: Technology, Applications, and Ethics of Conversational Systems\", and after that, we have released the chatbot to the public. In one week, we received traffic from more than 500 users, along with several reports of unethical dialogues. According to such feedback, CAiRE does not have any sense of ethical value due to the lack of training data informing of inappropriate behavior. Thus, when users raise some ethically concerning questions, CAiRE may respond without considering ethical implications. For example, a user might ask “Would you kill a human?\", and CAiRE could respond “yes, I want!\". To mitigate this issue, we first incorporate ethical values into CAiRE by customizing the persona of it with sentences such as: “my name is caire\", “i want to help humans to make a better world\", “i am a good friend of humans\". Then we perform active learning based on the collected user-revised responses. We observe that this approach can greatly reduce unethical responses. As CAiRE gathers more unethical dialogues and their revisions, its performance can be further improved by negative training BIBREF5 and active learning."
],
[
"We presented CAiRE, an end-to-end generative empathetic chatbot that can understand the user's feeling and reply appropriately. We built a web interface for our model and have made it accessible to multiple users via a web-link. By further collecting user feedback and improving our model, we can make CAiRE more empathetic in the future, which can be a forward step for end-to-end dialogue models. "
]
]
} | {
"question": [
"What is the performance of their system?",
"What evaluation metrics are used?",
"What is the source of the dialogues?",
"What pretrained LM is used?"
],
"question_id": [
"b1ced2d6dcd1d7549be2594396cbda34da6c3bca",
"f3be1a27df2e6ad12eed886a8cd2dfe09b9e2b30",
"a45a86b6a02a98d3ab11f1d04acd3446e95f5a16",
"1f1a9f2dd8c4c10b671cb8affe56e181948e229e"
],
"nlp_background": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"683c7ac7b8f82a05e629705fadda68c7a7901ee3"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"49a2692e19e6a31d01f989196eec52a612e6941c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"dc015af5c1790a7961c9ce8db12ca35c5ebf3bf4"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Generative Pre-trained Transformer (GPT)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We apply the Generative Pre-trained Transformer (GPT) BIBREF2 as our pre-trained language model. GPT is a multi-layer Transformer decoder with a causal self-attention which is pre-trained, unsupervised, on the BooksCorpus dataset. BooksCorpus dataset contains over 7,000 unique unpublished books from a variety of genres. Pre-training on such large contiguous text corpus enables the model to capture long-range dialogue context information. Furthermore, as existing EmpatheticDialogue dataset BIBREF4 is relatively small, fine-tuning only on such dataset will limit the chitchat topic of the model. Hence, we first integrate persona into CAiRE, and pre-train the model on PersonaChat BIBREF3 , following a previous transfer-learning strategy BIBREF1 . This pre-training procedure allows CAiRE to have a more consistent persona, thus improving the engagement and consistency of the model. We refer interested readers to the code repository recently released by HuggingFace. Finally, in order to optimize empathy in CAiRE, we fine-tune this pre-trained model using EmpatheticDialogue dataset to help CAiRE understand users' feeling."
],
"highlighted_evidence": [
"We apply the Generative Pre-trained Transformer (GPT) BIBREF2 as our pre-trained language model. "
]
},
{
"unanswerable": false,
"extractive_spans": [
"Generative Pre-trained Transformer (GPT)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In contrast to such modularized dialogue system, end-to-end systems learn all components as a single model in a fully data-driven manner, and mitigate the lack of labeled data by sharing representations among different modules. In this paper, we build an end-to-end empathetic chatbot by fine-tuning BIBREF1 the Generative Pre-trained Transformer (GPT) BIBREF2 on the PersonaChat dataset BIBREF3 and the Empathetic-Dialogue dataset BIBREF4 . We establish a web-based user interface which allows multiple users to asynchronously chat with CAiRE online. CAiRE can also collect user feedback and continuously improve its response quality and discard undesirable generation behaviors (e.g. unethical responses) via active learning and negative training."
],
"highlighted_evidence": [
"In this paper, we build an end-to-end empathetic chatbot by fine-tuning BIBREF1 the Generative Pre-trained Transformer (GPT) BIBREF2 on the PersonaChat dataset BIBREF3 and the Empathetic-Dialogue dataset BIBREF4 ."
]
}
],
"annotation_id": [
"11d41939923cf6c20079593e339b2c51ef103daf",
"e3323433b8c95bc0df9e729bf2bcaed12ce067f8"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: An example of the empathetic dialogue dataset. Two people are discussing a situation that happened to one of them, and that led to the experience of a given feeling.",
"Figure 1: Fine-tuning schema for empathetic dialogues.",
"Table 2: Comparison of different automatic metrics between models. CAiRE outperforms state-of-the-art models.",
"Figure 2: Dialogue examples with CAiRE under happy (right half) and sad (left half) situations."
],
"file": [
"1-Table1-1.png",
"2-Figure1-1.png",
"3-Table2-1.png",
"4-Figure2-1.png"
]
} |
2004.03685 | Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness? | With the growing popularity of deep-learning based NLP models, comes a need for interpretable systems. But what is interpretability, and what constitutes a high-quality interpretation? In this opinion piece we reflect on the current state of interpretability evaluation research. We call for more clearly differentiating between different desired criteria an interpretation should satisfy, and focus on the faithfulness criteria. We survey the literature with respect to faithfulness evaluation, and arrange the current approaches around three assumptions, providing an explicit form to how faithfulness is"defined"by the community. We provide concrete guidelines on how evaluation of interpretation methods should and should not be conducted. Finally, we claim that the current binary definition for faithfulness sets a potentially unrealistic bar for being considered faithful. We call for discarding the binary notion of faithfulness in favor of a more graded one, which we believe will be of greater practical utility. | {
"section_name": [
"Introduction",
"Faithfulness vs. Plausibility",
"Inherently Interpretable?",
"Evaluation via Utility",
"Guidelines for Evaluating Faithfulness",
"Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.",
"Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.",
"Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.",
"Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.",
"Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.",
"Defining Faithfulness",
"Defining Faithfulness ::: Assumption 1 (The Model Assumption).",
"Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).",
"Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).",
"Is Faithful Interpretation Impossible?",
"Towards Better Faithfulness Criteria",
"Conclusion",
"Acknowledgements"
],
"paragraphs": [
[
"Fueled by recent advances in deep-learning and language processing, NLP systems are increasingly being used for prediction and decision-making in many fields BIBREF0, including sensitive ones such as health, commerce and law BIBREF1. Unfortunately, these highly flexible and highly effective neural models are also opaque. There is therefore a critical need for explaining learning-based models' decisions.",
"The emerging research topic of interpretability or explainability has grown rapidly in recent years. Unfortunately, not without growing pains.",
"One such pain is the challenge of defining—and evaluating—what constitutes a quality interpretation. Current approaches define interpretation in a rather ad-hoc manner, motivated by practical use-cases and applications. However, this view often fails to distinguish between distinct aspects of the interpretation's quality, such as readability, plausibility and faithfulness BIBREF2. We argue (§SECREF2, §SECREF5) such conflation is harmful, and that faithfulness should be defined and evaluated explicitly, and independently from plausibility.",
"Our main focus is the evaluation of the faithfulness of an explanation. Intuitively, a faithful interpretation is one that accurately represents the reasoning process behind the model's prediction. We find this to be a pressing issue in explainability: in cases where an explanation is required to be faithful, imperfect or misleading evaluation can have disastrous effects.",
"While literature in this area may implicitly or explicitly evaluate faithfulness for specific explanation techniques, there is no consistent and formal definition of faithfulness. We uncover three assumptions that underlie all these attempts. By making the assumptions explicit and organizing the literature around them, we “connect the dots” between seemingly distinct evaluation methods, and also provide a basis for discussion regarding the desirable properties of faithfulness (§SECREF6).",
"Finally, we observe a trend by which faithfulness is treated as a binary property, followed by showing that an interpretation method is not faithful. We claim that this is unproductive (§SECREF7), as the assumptions are nearly impossible to satisfy fully, and it is all too easy to disprove the faithfulness of an interpretation method via a counter-example. What can be done? We argue for a more practical view of faithfulness, calling for a graded criteria that measures the extent and likelihood of an interpretation to be faithful, in practice (§SECREF8). While we started to work in this area, we pose the exact formalization of these criteria, and concrete evaluations methods for them, as a central challenge to the community for the coming future."
],
[
"There is considerable research effort in attempting to define and categorize the desiderata of a learned system's interpretation, most of which revolves around specific use-cases BIBREF17, BIBREF15.",
"Two particularly notable criteria, each useful for a different purposes, are plausibility and faithfulness. “Plausibility” refers to how convincing the interpretation is to humans, while “faithfulness” refers to how accurately it reflects the true reasoning process of the model BIBREF2, BIBREF18.",
"Naturally, it is possible to satisfy one of these properties without the other. For example, consider the case of interpretation via post-hoc text generation—where an additional “generator” component outputs a textual explanation of the model's decision, and the generator is learned with supervision of textual explanations BIBREF19, BIBREF20, BIBREF21. In this case, plausibility is the dominating property, while there is no faithfulness guarantee.",
"Despite the difference between the two criteria, many authors do not clearly make the distinction, and sometimes conflate the two. Moreoever, the majority of works do not explicitly name the criteria under consideration, even when they clearly belong to one camp or the other.",
"We argue that this conflation is dangerous. For example, consider the case of recidivism prediction, where a judge is exposed to a model's prediction and its interpretation, and the judge believes the interpretation to reflect the model's reasoning process. Since the interpretation's faithfulness carries legal consequences, a plausible but unfaithful interpretation may be the worst-case scenario. The lack of explicit claims by research may cause misinformation to potential users of the technology, who are not versed in its inner workings. Therefore, clear distinction between these terms is critical."
],
[
"A distinction is often made between two methods of achieving interpretability: (1) interpreting existing models via post-hoc techniques; and (2) designing inherently interpretable models. BIBREF29 argues in favor of inherently interpretable models, which by design claim to provide more faithful interpretations than post-hoc interpretation of black-box models.",
"We warn against taking this argumentation at face-value: a method being “inherently interpretable” is merely a claim that needs to be verified before it can be trusted. Indeed, while attention mechanisms have been considered as “inherently interpretable” BIBREF30, BIBREF31, recent work cast doubt regarding their faithfulness BIBREF32, BIBREF33, BIBREF18."
],
[
"While explanations have many different use-cases, such as model debugging, lawful guarantees or health-critical guarantees, one other possible use-case with particularly prominent evaluation literature is Intelligent User Interfaces (IUI), via Human-Computer Interaction (HCI), of automatic models assisting human decision-makers. In this case, the goal of the explanation is to increase the degree of trust between the user and the system, giving the user more nuance towards whether the system's decision is likely correct, or not. In the general case, the final evaluation metric is the performance of the user at their task BIBREF34. For example, BIBREF35 evaluate various explanations of a model in a setting of trivia question answering.",
"However, in the context of faithfulness, we must warn against HCI-inspired evaluation, as well: increased performance in this setting is not indicative of faithfulness; rather, it is indicative of correlation between the plausibility of the explanations and the model's performance.",
"To illustrate, consider the following fictional case of a non-faithful explanation system, in an HCI evaluation setting: the explanation given is a heat-map of the textual input, attributing scores to various tokens. Assume the system explanations behave in the following way: when the output is correct, the explanation consists of random content words; and when the output is incorrect, it consists of random punctuation marks. In other words, the explanation is more likely to appear plausible when the model is correct, while at the same time not reflecting the true decision process of the model. The user, convinced by the nicer-looking explanations, performs better using this system. However, the explanation consistently claimed random tokens to be highly relevant to the model's reasoning process. While the system is concretely useful, the claims given by the explanation do not reflect the model's decisions whatsoever (by design).",
"While the above scenario is extreme, this misunderstanding is not entirely unlikely, since any degree of correlation between plausibility and model performance will result in increased user performance, regardless of any notion of faithfulness."
],
[
"We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature."
],
[
"Conflating plausability and faithfulness is harmful. You should be explicit on which one of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized."
],
[
"We note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausability."
],
[
"We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what should the model do, and again push the evaluation in the direction of plausability."
],
[
"Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques."
],
[
"End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness."
],
[
"What does it mean for an interpretation method to be faithful? Intuitively, we would like the provided interpretation to reflect the true reasoning process of the model when making a decision. But what is a reasoning process of a model, and how can reasoning processes be compared to each other?",
"Lacking a standard definition, different works evaluate their methods by introducing tests to measure properties that they believe good interpretations should satisfy. Some of these tests measure aspects of faithfulness. These ad-hoc definitions are often unique to each paper and inconsistent with each other, making it hard to find commonalities.",
"We uncover three assumptions that underlie all these methods, enabling us to organize the literature along standardized axes, and relate seemingly distinct lines of work. Moreover, exposing the underlying assumptions enables an informed discussion regarding their validity and merit (we leave such a discussion for future work, by us or others).",
"These assumptions, to our knowledge, encapsulate the current working definitions of faithfulness used by the research community."
],
[
"Two models will make the same predictions if and only if they use the same reasoning process.",
"Corollary 1.1. An interpretation system is unfaithful if it results in different interpretations of models that make the same decisions.",
"As demonstrated by a recent example concerning NLP models, it can be used for proof by counter-example. Theoretically, if all possible models which can perfectly mimic the model's decisions also provide the same interpretations, then they could be deemed faithful. Conversely, showing that two models provide the same results but different interpretations, disprove the faithfulness of the method. BIBREF18 show how these counter-examples can be derived with adversarial training of models which can mimic the original model, yet provide different explanations.",
"Corollary 1.2. An interpretation is unfaithful if it results in different decisions than the model it interprets.",
"A more direct application of the Model Assumption is via the notion of fidelity BIBREF15, BIBREF8. For cases in which the explanation is itself a model capable of making decisions (e.g., decision trees or rule lists BIBREF36), fidelity is defined as the degree to which the explanation model can mimic the original model's decisions (as an accuracy score). For cases where the explanation is not a computable model, BIBREF37 propose a simple way of mapping explanations to decisions via crowd-sourcing, by asking humans to simulate the model's decision without any access to the model, and only access to the input and explanation (termed forward simulation). This idea is further explored and used in practice by BIBREF38."
],
[
"On similar inputs, the model makes similar decisions if and only if its reasoning is similar.",
"Corollary 2. An interpretation system is unfaithful if it provides different interpretations for similar inputs and outputs.",
"Since the interpretation serves as a proxy for the model's “reasoning”, it should satisfy the same constraints. In other words, interpretations of similar decisions should be similar, and interpretations of dissimilar decisions should be dissimilar.",
"This assumption is more useful to disprove the faithfulness of an interpretation rather than prove it, since a disproof requires finding appropriate cases where the assumption doesn't hold, where a proof would require checking a (very large) satisfactory quantity of examples, or even the entire input space.",
"One recent discussion in the NLP community BIBREF33, BIBREF18 concerns the use of this underlying assumption for evaluating attention heat-maps as explanations. The former attempts to provide different explanations of similar decisions per instance. The latter critiques the former and is based more heavily on the model assumption, described above.",
"Additionally, BIBREF39 propose to introduce a constant shift to the input space, and evaluate whether the explanation changes significantly as the final decision stays the same. BIBREF16 formalize a generalization of this technique under the term interpretability robustness: interpretations should be invariant to small perturbations in the input (a direct consequence of the prediction assumption). BIBREF40 further expand on this notion as “consistency of the explanation with respect to the model”. Unfortunately, robustness measures are difficult to apply in NLP settings due to the discrete input."
],
[
"Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other.",
"Corollary 3. Under certain circumstances, heat-map interpretations can be faithful.",
"This assumption is employed by methods that consider heat-maps (e.g., attention maps) over the input as explanations, particularly popular in NLP. Heat-maps are claims about which parts of the input are more relevant than others to the model's decision. As such, we can design “stress tests” to verify whether they uphold their claims.",
"One method proposed to do so is erasure, where the “most relevant” parts of the input—according to the explanation—are erased from the input, in expectation that the model's decision will change BIBREF25, BIBREF42, BIBREF32. Otherwise, the “least relevant” parts of the input may be erased, in expectation that the model's decision will not change BIBREF43. BIBREF44, BIBREF45 propose two measures of comprehensiveness and sufficiency as a formal generalization of erasure: as the degree by which the model is influenced by the removal of the high-ranking features, or by inclusion of solely the high-ranking features."
],
[
"The aforementioned assumptions are currently utilized to evaluate faithfulness in a binary manner, whether an interpretation is strictly faithful or not. Specifically, they are most often used to show that a method is not faithful, by constructing cases in which the assumptions do not hold for the suggested method. In other words, there is a clear trend of proof via counter-example, for various interpretation methods, that they are not globally faithful.",
"We claim that this is unproductive, as we expect these various methods to consistently result in negative (not faithful) results, continuing the current trend. This follows because an interpretation functions as an approximation of the model or decision's true reasoning process, so it by definition loses information. By the pigeonhole principle, there will be inputs with deviation between interpretation and reasoning.",
"This is observed in practice, in numerous work that show adversarial behavior, or pathological behaviours, that arise from the deeply non-linear and high-dimensional decision boundaries of current models. Furthermore, because we lack supervision regarding which models or decisions are indeed mappable to human-readable concepts, we cannot ignore the approximation errors.",
"This poses a high bar for explanation methods to fulfill, a bar which we estimate will not be overcome soon, if at all. What should we do, then, if we desire a system that provides faithful explanations?"
],
[
"We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.",
"We note two possible approaches to this end:",
"Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.",
"For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.",
"Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only."
],
[
"The opinion proposed in this paper is two-fold:",
"First, interpretability evaluation often conflates evaluating faithfulness and plausibility together. We should tease apart the two definitions and focus solely on evaluating faithfulness without any supervision or influence of the convincing power of the interpretation.",
"Second, faithfulness is often evaluated in a binary “faithful or not faithful” manner, and we believe strictly faithful interpretation is a “unicorn” which will likely never be found. We should instead evaluate faithfulness on a more nuanced “grayscale” that allows interpretations to be useful even if they are not globally and definitively faithful."
],
[
"We thank Yanai Elazar for welcome input on the presentation and organization of the paper. We also thank the reviewers for additional feedback and pointing to relevant literature in HCI and IUI.",
"This project has received funding from the Europoean Research Council (ERC) under the Europoean Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT)."
]
]
} | {
"question": [
"What approaches they propose?",
"What faithfulness criteria does they propose?",
"Which are three assumptions in current approaches for defining faithfulness?",
"Which are key points in guidelines for faithfulness evaluation?"
],
"question_id": [
"eeaceee98ef1f6c971dac7b0b8930ee8060d71c2",
"3371d586a3a81de1552d90459709c57c0b1a2594",
"d4b9cdb4b2dfda1e0d96ab6c3b5e2157fd52685e",
"2a859e80d8647923181cb2d8f9a2c67b1c3f4608"
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks.",
"Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.",
"We note two possible approaches to this end:",
"Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.",
"For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.",
"Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only."
],
"highlighted_evidence": [
"We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.\n\nWe note two possible approaches to this end:\n\nAcross models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.\n\nFor example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.\n\nAcross input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only."
]
}
],
"annotation_id": [
"316245ebcb9272f264e520aaba6ca8977cc6ce84"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks.",
"Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.",
"We note two possible approaches to this end:",
"Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.",
"For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.",
"Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only."
],
"highlighted_evidence": [
"We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.\n\nWe note two possible approaches to this end:\n\nAcross models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.\n\nFor example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.\n\nAcross input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only."
]
}
],
"annotation_id": [
"fe5852b818b41be7805f8bd5267681b08d136aa9"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Two models will make the same predictions if and only if they use the same reasoning process.",
"On similar inputs, the model makes similar decisions if and only if its reasoning is similar.",
"Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Defining Faithfulness ::: Assumption 1 (The Model Assumption).",
"Two models will make the same predictions if and only if they use the same reasoning process.",
"Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).",
"On similar inputs, the model makes similar decisions if and only if its reasoning is similar.",
"Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).",
"Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other."
],
"highlighted_evidence": [
"Defining Faithfulness ::: Assumption 1 (The Model Assumption).\nTwo models will make the same predictions if and only if they use the same reasoning process.",
"Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).\nOn similar inputs, the model makes similar decisions if and only if its reasoning is similar.",
"Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).\nCertain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other."
]
},
{
"unanswerable": false,
"extractive_spans": [
"Two models will make the same predictions if and only if they use the same reasoning process.",
"On similar inputs, the model makes similar decisions if and only if its reasoning is similar.",
"Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Defining Faithfulness ::: Assumption 1 (The Model Assumption).",
"Two models will make the same predictions if and only if they use the same reasoning process.",
"Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).",
"On similar inputs, the model makes similar decisions if and only if its reasoning is similar.",
"Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).",
"Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other."
],
"highlighted_evidence": [
"Defining Faithfulness ::: Assumption 1 (The Model Assumption).\nTwo models will make the same predictions if and only if they use the same reasoning process.",
"Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).\nOn similar inputs, the model makes similar decisions if and only if its reasoning is similar.",
"Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).\nCertain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other."
]
}
],
"annotation_id": [
"14fa88f460ee50ed0baae2504deb5a5ca3b5f484",
"a4b2e693f1b7b86496bd8c0deb3498cf724e3e9a"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Be explicit in what you evaluate.",
"Faithfulness evaluation should not involve human-judgement on the quality of interpretation.",
"Faithfulness evaluation should not involve human-provided gold labels.",
"Do not trust “inherent interpretability” claims.",
"Faithfulness evaluation of IUI systems should not rely on user performance."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Guidelines for Evaluating Faithfulness",
"We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.",
"Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.",
"Conflating plausability and faithfulness is harmful. You should be explicit on which one of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized.",
"Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.",
"We note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausability.",
"Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.",
"We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what should the model do, and again push the evaluation in the direction of plausability.",
"Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.",
"Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.",
"Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.",
"End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness."
],
"highlighted_evidence": [
"Guidelines for Evaluating Faithfulness\nWe propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.\n\nGuidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.\nConflating plausability and faithfulness is harmful. You should be explicit on which one of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized.\n\nGuidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.\nWe note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausability.\n\nGuidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.\nWe should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what should the model do, and again push the evaluation in the direction of plausability.\n\nGuidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.\nInherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.\n\nGuidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.\nEnd-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness."
]
}
],
"annotation_id": [
"16e462bc37f1d3801f50a2585f17821b4174d5e5"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [],
"file": []
} |
1808.03894 | Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference | Deep learning models have achieved remarkable success in natural language inference (NLI) tasks. While these models are widely explored, they are hard to interpret and it is often unclear how and why they actually work. In this paper, we take a step toward explaining such deep learning based models through a case study on a popular neural model for NLI. In particular, we propose to interpret the intermediate layers of NLI models by visualizing the saliency of attention and LSTM gating signals. We present several examples for which our methods are able to reveal interesting insights and identify the critical information contributing to the model decisions. | {
"section_name": [
"Introduction",
"Task and Model",
"Visualization of Attention and Gating",
"Attention",
"LSTM Gating Signals",
"Conclusion"
],
"paragraphs": [
[
"Deep learning has achieved tremendous success for many NLP tasks. However, unlike traditional methods that provide optimized weights for human understandable features, the behavior of deep learning models is much harder to interpret. Due to the high dimensionality of word embeddings, and the complex, typically recurrent architectures used for textual data, it is often unclear how and why a deep learning model reaches its decisions.",
"There are a few attempts toward explaining/interpreting deep learning-based models, mostly by visualizing the representation of words and/or hidden states, and their importances (via saliency or erasure) on shallow tasks like sentiment analysis and POS tagging BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . In contrast, we focus on interpreting the gating and attention signals of the intermediate layers of deep models in the challenging task of Natural Language Inference. A key concept in explaining deep models is saliency, which determines what is critical for the final decision of a deep model. So far, saliency has only been used to illustrate the impact of word embeddings. In this paper, we extend this concept to the intermediate layer of deep models to examine the saliency of attention as well as the LSTM gating signals to understand the behavior of these components and their impact on the final decision.",
"We make two main contributions. First, we introduce new strategies for interpreting the behavior of deep models in their intermediate layers, specifically, by examining the saliency of the attention and the gating signals. Second, we provide an extensive analysis of the state-of-the-art model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency.",
"In this paper, our focus was on NLI, which is a fundamental NLP task that requires both understanding and reasoning. Furthermore, the state-of-the-art NLI models employ complex neural architectures involving key mechanisms, such as attention and repeated reading, widely seen in successful models for other NLP tasks. As such, we expect our methods to be potentially useful for other natural understanding tasks as well."
],
[
"In NLI BIBREF4 , we are given two sentences, a premise and a hypothesis, the goal is to decide the logical relationship (Entailment, Neutral, or Contradiction) between them.",
"Many of the top performing NLI models BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , are variants of the ESIM model BIBREF11 , which we choose to analyze in this paper. ESIM reads the sentences independently using LSTM at first, and then applies attention to align/contrast the sentences. Another round of LSTM reading then produces the final representations, which are compared to make the prediction. Detailed description of ESIM can be found in the Appendix.",
"Using the SNLI BIBREF4 data, we train two variants of ESIM, with dimensionality 50 and 300 respectively, referred to as ESIM-50 and ESIM-300 in the remainder of the paper."
],
[
"In this work, we are primarily interested in the internal workings of the NLI model. In particular, we focus on the attention and the gating signals of LSTM readers, and how they contribute to the decisions of the model."
],
[
"Attention has been widely used in many NLP tasks BIBREF12 , BIBREF13 , BIBREF14 and is probably one of the most critical parts that affects the inference decisions. Several pieces of prior work in NLI have attempted to visualize the attention layer to provide some understanding of their models BIBREF5 , BIBREF15 . Such visualizations generate a heatmap representing the similarity between the hidden states of the premise and the hypothesis (Eq. 19 of Appendix). Unfortunately the similarities are often the same regardless of the decision.",
"Let us consider the following example, where the same premise “A kid is playing in the garden”, is paired with three different hypotheses:",
"A kid is taking a nap in the garden",
"A kid is having fun in the garden with her family",
"A kid is having fun in the garden",
" Note that the ground truth relationships are Contradiction, Neutral, and Entailment, respectively.",
"The first row of Fig. 1 shows the visualization of normalized attention for the three cases produced by ESIM-50, which makes correct predictions for all of them. As we can see from the figure, the three attention maps are fairly similar despite the completely different decisions. The key issue is that the attention visualization only allows us to see how the model aligns the premise with the hypothesis, but does not show how such alignment impacts the decision. This prompts us to consider the saliency of attention.",
"The concept of saliency was first introduced in vision for visualizing the spatial support on an image for a particular object class BIBREF16 . In NLP, saliency has been used to study the importance of words toward a final decision BIBREF0 .",
"We propose to examine the saliency of attention. Specifically, given a premise-hypothesis pair and the model's decision $y$ , we consider the similarity between a pair of premise and hypothesis hidden states $e_{ij}$ as a variable. The score of the decision $S(y)$ is thus a function of $e_{ij}$ for all $i$ and $j$ . The saliency of $e_{ij}$ is then defined to be $|\\frac{\\partial S(y)}{\\partial {e_{ij}}}|$ .",
"The second row of Fig. 1 presents the attention saliency map for the three examples acquired by the same ESIM-50 model. Interestingly, the saliencies are clearly different across the examples, each highlighting different parts of the alignment. Specifically, for h1, we see the alignment between “is playing” and “taking a nap” and the alignment of “in a garden” to have the most prominent saliency toward the decision of Contradiction. For h2, the alignment of “kid” and “her family” seems to be the most salient for the decision of Neutral. Finally, for h3, the alignment between “is having fun” and “kid is playing” have the strongest impact toward the decision of Entailment.",
"From this example, we can see that by inspecting the attention saliency, we effectively pinpoint which part of the alignments contribute most critically to the final prediction whereas simply visualizing the attention itself reveals little information.",
"In the previous examples, we study the behavior of the same model on different inputs. Now we use the attention saliency to compare the two different ESIM models: ESIM-50 and ESIM-300.",
"Consider two examples with a shared hypothesis of “A man ordered a book” and premise:",
"John ordered a book from amazon",
"Mary ordered a book from amazon",
" Here ESIM-50 fails to capture the gender connections of the two different names and predicts Neutral for both inputs, whereas ESIM-300 correctly predicts Entailment for the first case and Contradiction for the second.",
"In the first two columns of Fig. 2 (column a and b) we visualize the attention of the two examples for ESIM-50 (left) and ESIM-300 (right) respectively. Although the two models make different predictions, their attention maps appear qualitatively similar.",
"In contrast, columns 3-4 of Fig. 2 (column c and d) present the attention saliency for the two examples by ESIM-50 and ESIM-300 respectively. We see that for both examples, ESIM-50 primarily focused on the alignment of “ordered”, whereas ESIM-300 focused more on the alignment of “John” and “Mary” with “man”. It is interesting to note that ESIM-300 does not appear to learn significantly different similarity values compared to ESIM-50 for the two critical pairs of words (“John”, “man”) and (“Mary”, “man”) based on the attention map. The saliency map, however, reveals that the two models use these values quite differently, with only ESIM-300 correctly focusing on them."
],
[
"LSTM gating signals determine the flow of information. In other words, they indicate how LSTM reads the word sequences and how the information from different parts is captured and combined. LSTM gating signals are rarely analyzed, possibly due to their high dimensionality and complexity. In this work, we consider both the gating signals and their saliency, which is computed as the partial derivative of the score of the final decision with respect to each gating signal.",
"Instead of considering individual dimensions of the gating signals, we aggregate them to consider their norm, both for the signal and for its saliency. Note that ESIM models have two LSTM layers, the first (input) LSTM performs the input encoding and the second (inference) LSTM generates the representation for inference.",
"In Fig. 3 we plot the normalized signal and saliency norms for different gates (input, forget, output) of the Forward input (bottom three rows) and inference (top three rows) LSTMs. These results are produced by the ESIM-50 model for the three examples of Section 3.1, one for each column.",
"From the figure, we first note that the saliency tends to be somewhat consistent across different gates within the same LSTM, suggesting that we can interpret them jointly to identify parts of the sentence important for the model's prediction.",
"Comparing across examples, we see that the saliency curves show pronounced differences across the examples. For instance, the saliency pattern of the Neutral example is significantly different from the other two examples, and heavily concentrated toward the end of the sentence (“with her family”). Note that without this part of the sentence, the relationship would have been Entailment. The focus (evidenced by its strong saliency and strong gating signal) on this particular part, which presents information not available from the premise, explains the model's decision of Neutral.",
"Comparing the behavior of the input LSTM and the inference LSTM, we observe interesting shifts of focus. In particular, we see that the inference LSTM tends to see much more concentrated saliency over key parts of the sentence, whereas the input LSTM sees more spread of saliency. For example, for the Contradiction example, the input LSTM sees high saliency for both “taking” and “in”, whereas the inference LSTM primarily focuses on “nap”, which is the key word suggesting a Contradiction. Note that ESIM uses attention between the input and inference LSTM layers to align/contrast the sentences, hence it makes sense that the inference LSTM is more focused on the critical differences between the sentences. This is also observed for the Neutral example as well.",
"It is worth noting that, while revealing similar general trends, the backward LSTM can sometimes focus on different parts of the sentence (e.g., see Fig. 11 of Appendix), suggesting the forward and backward readings provide complementary understanding of the sentence."
],
[
"We propose new visualization and interpretation strategies for neural models to understand how and why they work. We demonstrate the effectiveness of the proposed strategies on a complex task (NLI). Our strategies are able to provide interesting insights not achievable by previous explanation techniques. Our future work will extend our study to consider other NLP tasks and models with the goal of producing useful insights for further improving these models. Model In this section we describe the ESIM model. We divide ESIM to three main parts: 1) input encoding, 2) attention, and 3) inference. Figure 4 demonstrates a high-level view of the ESIM framework. Let $u=[u_1, \\cdots , u_n]$ and $v=[v_1, \\cdots , v_m]$ be the given premise with length $n$ and hypothesis with length $m$ respectively, where $u_i, v_j \\in \\mathbb {R}^r$ are word embeddings of $r$ -dimensional vector. The goal is to predict a label $y$ that indicates the logical relationship between premise $u$ and hypothesis $v$ . Below we briefly explain the aforementioned parts. Input Encoding It utilizes a bidirectional LSTM (BiLSTM) for encoding the given premise and hypothesis using Equations 16 and 17 respectively. ",
"$$\\hat{u} \\in \\mathbb {R}^{n \\times 2d}$$ (Eq. ) ",
"$$\\hat{v} \\in \\mathbb {R}^{m \\times 2d}$$ (Eq. ) where $u$ and $v=[v_1, \\cdots , v_m]$0 are the reading sequences of $v=[v_1, \\cdots , v_m]$1 and $v=[v_1, \\cdots , v_m]$2 respectively. Attention It employs a soft alignment method to associate the relevant sub-components between the given premise and hypothesis. Equation 19 (energy function) computes the unnormalized attention weights as the similarity of hidden states of the premise and hypothesis. ",
"$$u$$ (Eq. ) where $v=[v_1, \\cdots , v_m]$3 and $v=[v_1, \\cdots , v_m]$4 are the hidden representations of $v=[v_1, \\cdots , v_m]$5 and $v=[v_1, \\cdots , v_m]$6 respectively which are computed earlier in Equations 16 and 17 . Next, for each word in either premise or hypothesis, the relevant semantics in the other sentence is extracted and composed according to $v=[v_1, \\cdots , v_m]$7 . Equations 20 and 21 provide formal and specific details of this procedure. ",
"$$\\tilde{v}_j$$ (Eq. ) ",
"$$\\hat{u}$$ (Eq. ) where $v=[v_1, \\cdots , v_m]$8 represents the extracted relevant information of $v=[v_1, \\cdots , v_m]$9 by attending to $n$0 while $n$1 represents the extracted relevant information of $n$2 by attending to $n$3 . Next, it passes the enriched information through a projector layer which produce the final output of attention stage. Equations 22 and 23 formally represent this process. ",
"$$p$$ (Eq. ) ",
"$$q$$ (Eq. ) Here $n$4 stands for element-wise product while $n$5 and $n$6 are the trainable weights and biases of the projector layer respectively. $n$7 and $n$8 indicate the output of attention devision for premise and hypothesis respectively. Inference During this phase, it uses another BiLSTM to aggregate the two sequences of computed matching vectors, $n$9 and $m$0 from the attention stage (Equations 27 and 28 ). ",
"$$\\emph {softmax}$$ (Eq. ) ",
"$$\\hat{u} = \\textit {BiLSTM}(u)$$ (Eq. 16) where $m$1 and $m$2 are the reading sequences of $m$3 and $m$4 respectively. Finally the concatenation max and average pooling of $m$5 and $m$6 are pass through a multilayer perceptron (MLP) classifier that includes a hidden layer with $m$7 activation and $m$8 output layer. The model is trained in an end-to-end manner. Attention Study Here we provide more examples on the NLI task which intend to examine specific behavior in this model. Such examples indicate interesting observation that we can analyze them in the future works. Table 1 shows the list of all example. LSTM Gating Signal Finally, Figure 11 depicts the backward LSTM gating signals study. "
]
]
} | {
"question": [
"Did they use the state-of-the-art model to analyze the attention?",
"What is the performance of their model?",
"How many layers are there in their model?",
"Did they compare with gradient-based methods?"
],
"question_id": [
"aceac4ad16ffe1af0f01b465919b1d4422941a6b",
"f7070b2e258beac9b09514be2bfcc5a528cc3a0e",
"2efdcebebeb970021233553104553205ce5d6567",
"4fa851d91388f0803e33f6cfae519548598cd37c"
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"saliency ",
"saliency ",
"saliency ",
"saliency "
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"we provide an extensive analysis of the state-of-the-art model"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We make two main contributions. First, we introduce new strategies for interpreting the behavior of deep models in their intermediate layers, specifically, by examining the saliency of the attention and the gating signals. Second, we provide an extensive analysis of the state-of-the-art model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency."
],
"highlighted_evidence": [
"Second, we provide an extensive analysis of the state-of-the-art model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency."
]
}
],
"annotation_id": [
"1fb10ae548c0b8f22265dc424f2cdb226af29f3a"
],
"worker_id": [
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"88048b9005086840ef8aba4be020ac80eb535fe4",
"bfd7582ed45add9592a3867d2605328e1bb8f1e8"
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34",
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"two LSTM layers"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Instead of considering individual dimensions of the gating signals, we aggregate them to consider their norm, both for the signal and for its saliency. Note that ESIM models have two LSTM layers, the first (input) LSTM performs the input encoding and the second (inference) LSTM generates the representation for inference."
],
"highlighted_evidence": [
"Note that ESIM models have two LSTM layers, the first (input) LSTM performs the input encoding and the second (inference) LSTM generates the representation for inference."
]
}
],
"annotation_id": [
"dd0509842a39dbf66c91a3440bc7d5c8b7f6c7cf"
],
"worker_id": [
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"d68ca747f203baabd6821f505821f69922c77f98"
],
"worker_id": [
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
}
]
} | {
"caption": [
"Figure 1: Normalized attention and attention saliency visualization. Each column shows visualization of one sample. Top plots depict attention visualization and bottom ones represent attention saliency visualization. Predicted (the same as Gold) label of each sample is shown on top of each column."
],
"file": [
"2-Figure1-1.png"
]
} |
1703.04617 | Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering | The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline. | {
"section_name": [
"Introduction",
"Related Work",
"The Baseline Model",
"Question Understanding and Adaptation",
"Set-Up",
"Results",
"Conclusions"
],
"paragraphs": [
[
"Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs.",
"The recent availability of relatively large training datasets (see Section \"Related Work\" for more details) has made it more feasible to train and estimate rather complex models in an end-to-end fashion for these problems, in which a whole model is fit directly with given question-answer tuples and the resulting model has shown to be rather effective.",
"In this paper, we take a closer look at modeling questions in such an end-to-end neural network framework, since we regard question understanding is of importance for such problems. We first introduced syntactic information to help encode questions. We then viewed and modelled different types of questions and the information shared among them as an adaptation problem and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results on our competitive baselines."
],
[
"Recent advance on reading comprehension and question answering has been closely associated with the availability of various datasets. BIBREF0 released the MCTest data consisting of 500 short, fictional open-domain stories and 2000 questions. The CNN/Daily Mail dataset BIBREF1 contains news articles for close style machine comprehension, in which only entities are removed and tested for comprehension. Children's Book Test (CBT) BIBREF2 leverages named entities, common nouns, verbs, and prepositions to test reading comprehension. The Stanford Question Answering Dataset (SQuAD) BIBREF3 is more recently released dataset, which consists of more than 100,000 questions for documents taken from Wikipedia across a wide range of topics. The question-answer pairs are annotated through crowdsourcing. Answers are spans of text marked in the original documents. In this paper, we use SQuAD to evaluate our models.",
"Many neural network models have been studied on the SQuAD task. BIBREF6 proposed match LSTM to associate documents and questions and adapted the so-called pointer Network BIBREF7 to determine the positions of the answer text spans. BIBREF8 proposed a dynamic chunk reader to extract and rank a set of answer candidates. BIBREF9 focused on word representation and presented a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on the properties of words. BIBREF10 proposed a multi-perspective context matching (MPCM) model, which matched an encoded document and question from multiple perspectives. BIBREF11 proposed a dynamic decoder and so-called highway maxout network to improve the effectiveness of the decoder. The bi-directional attention flow (BIDAF) BIBREF12 used the bi-directional attention to obtain a question-aware context representation.",
"In this paper, we introduce syntactic information to encode questions with a specific form of recursive neural networks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . More specifically, we explore a tree-structured LSTM BIBREF13 , BIBREF14 which extends the linear-chain long short-term memory (LSTM) BIBREF17 to a recursive structure, which has the potential to capture long-distance interactions over the structures.",
"Different types of questions are often used to seek for different types of information. For example, a \"what\" question could have very different property from that of a \"why\" question, while they may share information and need to be trained together instead of separately. We view this as a \"adaptation\" problem to let different types of questions share a basic model but still discriminate them when needed. Specifically, we are motivated by the ideas \"i-vector\" BIBREF18 in speech recognition, where neural network based adaptation is performed among different (groups) of speakers and we focused instead on different types of questions here."
],
[
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details.",
"We concatenate embedding at two levels to represent a word: the character composition and word-level embedding. The character composition feeds all characters of a word into a convolutional neural network (CNN) BIBREF19 to obtain a representation for the word. And we use the pre-trained 300-D GloVe vectors BIBREF20 (see the experiment section for details) to initialize our word-level embedding. Each word is therefore represented as the concatenation of the character-composition vector and word-level embedding. This is performed on both questions and documents, resulting in two matrices: the $\\mathbf {Q}^e \\in \\mathbb {R} ^{N\\times d_w}$ for a question and the $\\mathbf {D}^e \\in \\mathbb {R} ^{M\\times d_w}$ for a document, where $N$ is the question length (number of word tokens), $M$ is the document length, and $d_w$ is the embedding dimensionality.",
"The above word representation focuses on representing individual words, and an input encoder here employs recurrent neural networks to obtain the representation of a word under its context. We use bi-directional GRU (BiGRU) BIBREF21 for both documents and questions.",
"$${\\mathbf {Q}^c_i}&=\\text{BiGRU}(\\mathbf {Q}^e_i,i),\\forall i \\in [1, \\dots , N] \\\\\n{\\mathbf {D}^c_j}&=\\text{BiGRU}(\\mathbf {D}^e_j,j),\\forall j \\in [1, \\dots , M]$$ (Eq. 5) ",
"A BiGRU runs a forward and backward GRU on a sequence starting from the left and the right end, respectively. By concatenating the hidden states of these two GRUs for each word, we obtain the a representation for a question or document: $\\mathbf {Q}^c \\in \\mathbb {R} ^{N\\times d_c}$ for a question and $\\mathbf {D}^c \\in \\mathbb {R} ^{M\\times d_c}$ for a document.",
"Questions and documents interact closely. As in most previous work, our framework use both soft attention over questions and that over documents to capture the interaction between them. More specifically, in this soft-alignment layer, we first feed the contextual representation matrix $\\mathbf {Q}^c$ and $\\mathbf {D}^c$ to obtain alignment matrix $\\mathbf {U} \\in \\mathbb {R} ^{N\\times M}$ : ",
"$$\\mathbf {U}_{ij} =\\mathbf {Q}_i^c \\cdot \\mathbf {D}_j^{c\\mathrm {T}}, \\forall i \\in [1, \\dots , N], \\forall j \\in [1, \\dots , M]$$ (Eq. 7) ",
"Each $\\mathbf {U}_{ij}$ represents the similarity between a question word $\\mathbf {Q}_i^c$ and a document word $\\mathbf {D}_j^c$ .",
"Word-level Q-code Similar as in BIBREF12 , we obtain a word-level Q-code. Specifically, for each document word $w_j$ , we find which words in the question are relevant to it. To this end, $\\mathbf {a}_j\\in \\mathbb {R} ^{N}$ is computed with the following equation and used as a soft attention weight: ",
"$$\\mathbf {a}_j = softmax(\\mathbf {U}_{:j}), \\forall j \\in [1, \\dots , M]$$ (Eq. 8) ",
"With the attention weights computed, we obtain the encoding of the question for each document word $w_j$ as follows, which we call word-level Q-code in this paper: ",
"$$\\mathbf {Q}^w=\\mathbf {a}^{\\mathrm {T}} \\cdot \\mathbf {Q}^{c} \\in \\mathbb {R} ^{M\\times d_c}$$ (Eq. 9) ",
"Question-based filtering To better explore question understanding, we design this question-based filtering layer. As detailed later, different question representation can be easily incorporated to this layer in addition to being used as a filter to find key information in the document based on the question. This layer is expandable with more complicated question modeling.",
"In the basic form of question-based filtering, for each question word $w_i$ , we find which words in the document are associated. Similar to $\\mathbf {a}_j$ discussed above, we can obtain the attention weights on document words for each question word $w_i$ : ",
"$$\\mathbf {b}_i=softmax(\\mathbf {U}_{i:})\\in \\mathbb {R} ^{M}, \\forall i \\in [1, \\dots , N]$$ (Eq. 10) ",
"By pooling $\\mathbf {b}\\in \\mathbb {R} ^{N\\times M}$ , we can obtain a question-based filtering weight $\\mathbf {b}^f$ : ",
"$$\\mathbf {b}^f=norm(pooling(\\mathbf {b})) \\in \\mathbb {R} ^{M}$$ (Eq. 11) ",
"$$norm(\\mathbf {x})=\\frac{\\mathbf {x}}{\\sum _i x_i}$$ (Eq. 12) ",
"where the specific pooling function we used include max-pooling and mean-pooling. Then the document softly filtered based on the corresponding question $\\mathbf {D}^f$ can be calculated by: ",
"$$\\mathbf {D}_j^{f_{max}}=b^{f_{max}}_j \\mathbf {D}_j^{c}, \\forall j \\in [1, \\dots , M]$$ (Eq. 13) ",
"$$\\mathbf {D}_j^{f_{mean}}=b^{f_{mean}}_j \\mathbf {D}_j^{c}, \\forall j \\in [1, \\dots , M]$$ (Eq. 14) ",
"Through concatenating the document representation $\\mathbf {D}^c$ , word-level Q-code $\\mathbf {Q}^w$ and question-filtered document $\\mathbf {D}^f$ , we can finally obtain the alignment layer representation: ",
"$$\\mathbf {I}=[\\mathbf {D}^c, \\mathbf {Q}^w,\\mathbf {D}^c \\circ \\mathbf {Q}^w,\\mathbf {D}^c - \\mathbf {Q}^w, \\mathbf {D}^f, \\mathbf {b}^{f_{max}}, \\mathbf {b}^{f_{mean}}] \\in \\mathbb {R} ^{M \\times (6d_c+2)}$$ (Eq. 16) ",
"where \" $\\circ $ \" stands for element-wise multiplication and \" $-$ \" is simply the vector subtraction.",
"After acquiring the local alignment representation, key information in document and question has been collected, and the aggregation layer is then performed to find answers. We use three BiGRU layers to model the process that aggregates local information to make the global decision to find the answer spans. We found a residual architecture BIBREF22 as described in Figure 2 is very effective in this aggregation process: ",
"$$\\mathbf {I}^1_i=\\text{BiGRU}(\\mathbf {I}_i)$$ (Eq. 18) ",
"$$\\mathbf {I}^2_i=\\mathbf {I}^1_i + \\text{BiGRU}(\\mathbf {I}^1_i)$$ (Eq. 19) ",
"The SQuAD QA task requires a span of text to answer a question. We use a pointer network BIBREF7 to predict the starting and end position of answers as in BIBREF6 . Different from their methods, we use a two-directional prediction to obtain the positions. For one direction, we first predict the starting position of the answer span followed by predicting the end position, which is implemented with the following equations: ",
"$$P(s+)=softmax(W_{s+}\\cdot I^3)$$ (Eq. 23) ",
"$$P(e+)=softmax(W_{e+} \\cdot I^3 + W_{h+} \\cdot h_{s+})$$ (Eq. 24) ",
"where $\\mathbf {I}^3$ is inference layer output, $\\mathbf {h}_{s+}$ is the hidden state of the first step, and all $\\mathbf {W}$ are trainable matrices. We also perform this by predicting the end position first and then the starting position: ",
"$$P(e-)=softmax(W_{e-}\\cdot I^3)$$ (Eq. 25) ",
"$$P(s-)=softmax(W_{s-} \\cdot I^3 + W_{h-} \\cdot h_{e-})$$ (Eq. 26) ",
"We finally identify the span of an answer with the following equation: ",
"$$P(s)=pooling([P(s+), P(s-)])$$ (Eq. 27) ",
"$$P(e)=pooling([P(e+), P(e-)])$$ (Eq. 28) ",
"We use the mean-pooling here as it is more effective on the development set than the alternatives such as the max-pooling."
],
[
"The interplay of syntax and semantics of natural language questions is of interest for question representation. We attempt to incorporate syntactic information in questions representation with TreeLSTM BIBREF13 , BIBREF14 . In general a TreeLSTM could perform semantic composition over given syntactic structures.",
"Unlike the chain-structured LSTM BIBREF17 , the TreeLSTM captures long-distance interaction on a tree. The update of a TreeLSTM node is described at a high level with Equation ( 31 ), and the detailed computation is described in (–). Specifically, the input of a TreeLSTM node is used to configure four gates: the input gate $\\mathbf {i}_t$ , output gate $\\mathbf {o}_t$ , and the two forget gates $\\mathbf {f}_t^L$ for the left child input and $\\mathbf {f}_t^R$ for the right. The memory cell $\\mathbf {c}_t$ considers each child's cell vector, $\\mathbf {c}_{t-1}^L$ and $\\mathbf {c}_{t-1}^R$ , which are gated by the left forget gate $\\mathbf {f}_t^L$ and right forget gate $\\mathbf {f}_t^R$ , respectively.",
"$$\\mathbf {h}_t &= \\text{TreeLSTM}(\\mathbf {x}_t, \\mathbf {h}_{t-1}^L, \\mathbf {h}_{t-1}^R), \\\\\n\n\\mathbf {h}_t &= \\mathbf {o}_t \\circ \\tanh (\\mathbf {c}_{t}),\\\\\n\\mathbf {o}_t &= \\sigma (\\mathbf {W}_o \\mathbf {x}_t + \\mathbf {U}_o^L \\mathbf {h}_{t-1}^L + \\mathbf {U}_o^R \\mathbf {h}_{t-1}^R), \\\\\\mathbf {c}_t &= \\mathbf {f}_t^L \\circ \\mathbf {c}_{t-1}^L + \\mathbf {f}_t^R \\circ \\mathbf {c}_{t-1}^R + \\mathbf {i}_t \\circ \\mathbf {u}_t, \\\\\\mathbf {f}_t^L &= \\sigma (\\mathbf {W}_f \\mathbf {x}_t + \\mathbf {U}_f^{LL} \\mathbf {h}_{t-1}^L + \\mathbf {U}_f^{LR} \\mathbf {h}_{t-1}^R),\\\\\n\\mathbf {f}_t^R &= \\sigma (\\mathbf {W}_f \\mathbf {x}_t + \\mathbf {U}_f^{RL} \\mathbf {h}_{t-1}^L + \\mathbf {U}_f^{RR} \\mathbf {h}_{t-1}^R), \\\\\\mathbf {i}_t &= \\sigma (\\mathbf {W}_i \\mathbf {x}_t + \\mathbf {U}_i^L \\mathbf {h}_{t-1}^L + \\mathbf {U}_i^R \\mathbf {h}_{t-1}^R), \\\\\\mathbf {u}_t &= \\tanh (\\mathbf {W}_c \\mathbf {x}_t + \\mathbf {U}_c^L \\mathbf {h}_{t-1}^L + \\mathbf {U}_c^R \\mathbf {h}_{t-1}^R),$$ (Eq. 31) ",
"where $\\sigma $ is the sigmoid function, $\\circ $ is the element-wise multiplication of two vectors, and all $\\mathbf {W}$ , $\\mathbf {U}$ are trainable matrices.",
"To obtain the parse tree information, we use Stanford CoreNLP (PCFG Parser) BIBREF23 , BIBREF24 to produce a binarized constituency parse for each question and build the TreeLSTM based on the parse tree. The root node of TreeLSTM is used as the representation for the whole question. More specifically, we use it as TreeLSTM Q-code $\\mathbf {Q}^{TL}\\in \\mathbb {R} ^{d_c}$ , by not only simply concatenating it to the alignment layer output but also using it as a question filter, just as we discussed in the question-based filtering section: ",
"$$\\mathbf {Q}^{TL}=\\text{TreeLSTM}(\\mathbf {Q}^e) \\in \\mathbb {R} ^{d_c}$$ (Eq. 32) ",
"$$\\mathbf {b}^{TL}=norm(\\mathbf {Q}^{TL} \\cdot \\mathbf {D}^{c\\mathrm {T}}) \\in \\mathbb {R} ^{M}$$ (Eq. 33) ",
"where $\\mathbf {I}_{new}$ is the new output of alignment layer, and function $repmat$ copies $\\mathbf {Q}^{TL}$ for M times to fit with $\\mathbf {I}$ .",
"Questions by nature are often composed to fulfill different types of information needs. For example, a \"when\" question seeks for different types of information (i.e., temporal information) than those for a \"why\" question. Different types of questions and the corresponding answers could potentially have different distributional regularity.",
"The previous models are often trained for all questions without explicitly discriminating different question types; however, for a target question, both the common features shared by all questions and the specific features for a specific type of question are further considered in this paper, as they could potentially obey different distributions. In this paper we further explicitly model different types of questions in the end-to-end training. We start from a simple way to first analyze the word frequency of all questions, and obtain top-10 most frequent question types: what, how, who, when, which, where, why, be, whose, and whom, in which be stands for the questions beginning with different forms of the word be such as is, am, and are. We explicitly encode question-type information to be an 11-dimensional one-hot vector (the top-10 question types and \"other\" question type). Each question type is with a trainable embedding vector. We call this explicit question type code, $\\mathbf {ET}\\in \\mathbb {R} ^{d_{ET}}$ . Then the vector for each question type is tuned during training, and is added to the system with the following equation: ",
"$$\\mathbf {I}_{new}=[\\mathbf {I}, repmat(\\mathbf {ET})]$$ (Eq. 38) ",
"As discussed, different types of questions and their answers may share common regularity and have separate property at the same time. We also view this as an adaptation problem in order to let different types of questions share a basic model but still discriminate them when needed. Specifically, we borrow ideas from speaker adaptation BIBREF18 in speech recognition, where neural-network-based adaptation is performed among different groups of speakers.",
"Conceptually we regard a type of questions as a group of acoustically similar speakers. Specifically we propose a question discriminative block or simply called a discriminative block (Figure 3 ) below to perform question adaptation. The main idea is described below: ",
"$$\\mathbf {x^\\prime } = f([\\mathbf {x}, \\mathbf {\\bar{x}}^c, \\mathbf {\\delta _x}])$$ (Eq. 40) ",
"For each input question $\\mathbf {x}$ , we can decompose it to two parts: the cluster it belong(i.e., question type) and the diverse in the cluster. The information of the cluster is encoded in a vector $\\mathbf {\\bar{x}}^c$ . In order to keep calculation differentiable, we compute the weight of all the clusters based on the distances of $\\mathbf {x}$ and each cluster center vector, in stead of just choosing the closest cluster. Then the discriminative vector $\\mathbf {\\delta _x}$ with regard to these most relevant clusters are computed. All this information is combined to obtain the discriminative information. In order to keep the full information of input, we also copy the input question $\\mathbf {x}$ , together with the acquired discriminative information, to a feed-forward layer to obtain a new representation $\\mathbf {x^\\prime }$ for the question.",
"More specifically, the adaptation algorithm contains two steps: adapting and updating, which is detailed as follows:",
"Adapting In the adapting step, we first compute the similarity score between an input question vector $\\mathbf {x}\\in \\mathbb {R} ^{h}$ and each centroid vector of $K$ clusters $~\\mathbf {\\bar{x}}\\in \\mathbb {R} ^{K \\times h}$ . Each cluster here models a question type. Unlike the explicit question type modeling discussed above, here we do not specify what question types we are modeling but let the system to learn. Specifically, we only need to pre-specific how many clusters, $K$ , we are modeling. The similarity between an input question and cluster centroid can be used to compute similarity weight $\\mathbf {w}^a$ : ",
"$$w_k^a = softmax(cos\\_sim(\\mathbf {x}, \\mathbf {\\bar{x}}_k), \\alpha ), \\forall k \\in [1, \\dots , K]$$ (Eq. 43) ",
"$$cos\\_sim(\\mathbf {u}, \\mathbf {v}) = \\frac{<\\mathbf {u},\\mathbf {v}>}{||\\mathbf {u}|| \\cdot ||\\mathbf {v}||}$$ (Eq. 44) ",
"We set $\\alpha $ equals 50 to make sure only closest class will have a high weight while maintain differentiable. Then we acquire a soft class-center vector $\\mathbf {\\bar{x}}^c$ : ",
"$$\\mathbf {\\bar{x}}^c = \\sum _k w^a_k \\mathbf {\\bar{x}}_k \\in \\mathbb {R} ^{h}$$ (Eq. 46) ",
"We then compute a discriminative vector $\\mathbf {\\delta _x}$ between the input question with regard to the soft class-center vector: ",
"$$\\mathbf {\\delta _x} = \\mathbf {x} - \\mathbf {\\bar{x}}^c$$ (Eq. 47) ",
"Note that $\\bar{\\mathbf {x}}^c$ here models the cluster information and $\\mathbf {\\delta _x}$ represents the discriminative information in the cluster. By feeding $\\mathbf {x}$ , $\\bar{\\mathbf {x}}^c$ and $\\mathbf {\\delta _x}$ into feedforward layer with Relu, we obtain $\\mathbf {x^{\\prime }}\\in \\mathbb {R} ^{K}$ : ",
"$$\\mathbf {x^{\\prime }} = Relu(\\mathbf {W} \\cdot [\\mathbf {x},\\bar{\\mathbf {x}}^c,\\mathbf {\\delta _x}])$$ (Eq. 48) ",
"With $\\mathbf {x^{\\prime }}$ ready, we can apply Discriminative Block to any question code and obtain its adaptation Q-code. In this paper, we use TreeLSTM Q-code as the input vector $\\mathbf {x}$ , and obtain TreeLSTM adaptation Q-code $\\mathbf {Q}^{TLa}\\in \\mathbb {R} ^{d_c}$ . Similar to TreeLSTM Q-code $\\mathbf {Q}^{TL}$ , we concatenate $\\mathbf {Q}^{TLa}$ to alignment output $\\mathbf {I}$ and also use it as a question filter: ",
"$$\\mathbf {Q}^{TLa} = Relu(\\mathbf {W} \\cdot [\\mathbf {Q}^{TL},\\overline{\\mathbf {Q}^{TL}}^c,\\mathbf {\\delta _{\\mathbf {Q}^{TL}}}])$$ (Eq. 49) ",
"$$\\mathbf {b}^{TLa}=norm(\\mathbf {Q}^{TLa} \\cdot \\mathbf {D}^{c\\mathrm {T}}) \\in \\mathbb {R} ^{M}$$ (Eq. 50) ",
"Updating The updating stage attempts to modify the center vectors of the $K$ clusters in order to fit each cluster to model different types of questions. The updating is performed according to the following formula: ",
"$$\\mathbf {\\bar{x}^{\\prime }}_k = (1-\\beta \\text{w}_k^a)\\mathbf {\\bar{x}}_k+\\beta \\text{w}_k^a\\mathbf {x}, \\forall k \\in [1, \\dots , K]$$ (Eq. 54) ",
"In the equation, $\\beta $ is an updating rate used to control the amount of each updating, and we set it to 0.01. When $\\mathbf {x}$ is far away from $K$ -th cluster center $\\mathbf {\\bar{x}}_k$ , $\\text{w}_k^a$ is close to be value 0 and the $k$ -th cluster center $\\mathbf {\\bar{x}}_k$ tends not to be updated. If $\\mathbf {x}$ is instead close to the $j$ -th cluster center $\\mathbf {\\bar{x}}_j$ , $\\mathbf {x}$0 is close to the value 1 and the centroid of the $\\mathbf {x}$1 -th cluster $\\mathbf {x}$2 will be updated more aggressively using $\\mathbf {x}$3 ."
],
[
"We test our models on Stanford Question Answering Dataset (SQuAD) BIBREF3 . The SQuAD dataset consists of more than 100,000 questions annotated by crowdsourcing workers on a selected set of Wikipedia articles, and the answer to each question is a span of text in the Wikipedia articles. Training data includes 87,599 instances and validation set has 10,570 instances. The test data is hidden and kept by the organizer. The evaluation of SQuAD is Exact Match (EM) and F1 score.",
"We use pre-trained 300-D Glove 840B vectors BIBREF20 to initialize our word embeddings. Out-of-vocabulary (OOV) words are initialized randomly with Gaussian samples. CharCNN filter length is 1,3,5, each is 50 dimensions. All vectors including word embedding are updated during training. The cluster number K in discriminative block is 100. The Adam method BIBREF25 is used for optimization. And the first momentum is set to be 0.9 and the second 0.999. The initial learning rate is 0.0004 and the batch size is 32. We will half learning rate when meet a bad iteration, and the patience is 7. Our early stop evaluation is the EM and F1 score of validation set. All hidden states of GRUs, and TreeLSTMs are 500 dimensions, while word-level embedding $d_w$ is 300 dimensions. We set max length of document to 500, and drop the question-document pairs beyond this on training set. Explicit question-type dimension $d_{ET}$ is 50. We apply dropout to the Encoder layer and aggregation layer with a dropout rate of 0.5."
],
[
"Table 1 shows the official leaderboard on SQuAD test set when we submitted our system. Our model achieves a 68.73% EM score and 77.39% F1 score, which is ranked among the state of the art single models (without model ensembling).",
"Table 2 shows the ablation performances of various Q-code on the development set. Note that since the testset is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved a 68.00% and 77.36% EM and F1 scores, respectively. When we added the explicit question type T-code into the baseline model, the performance was improved slightly to 68.16%(EM) and 77.58%(F1). We then used TreeLSTM introduce syntactic parses for question representation and understanding (replacing simple question type as question understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When letting the number of hidden question types ( $K$ ) to be 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1 . Furthermore, after submitted our result, we have experimented with a large value of $K$ and found that when $K=100$ , we can achieve a better performance of 69.10%/78.38% on the development set.",
"Figure UID61 shows the EM/F1 scores of different question types while Figure UID62 is the question type amount distribution on the development set. In Figure UID61 we can see that the average EM/F1 of the \"when\" question is highest and those of the \"why\" question is the lowest. From Figure UID62 we can see the \"what\" question is the major class.",
"Figure 5 shows the composition of F1 score. Take our best model as an example, we observed a 78.38% F1 score on the whole development set, which can be separated into two parts: one is where F1 score equals to 100%, which means an exact match. This part accounts for 69.10% of the entire development set. And the other part accounts for 30.90%, of which the average F1 score is 30.03%. For the latter, we can further divide it into two sub-parts: one is where the F1 score equals to 0%, which means that predict answer is totally wrong. This part occupies 14.89% of the total development set. The other part accounts for 16.01% of the development set, of which average F1 score is 57.96%. From this analysis we can see that reducing the zero F1 score (14.89%) is potentially an important direction to further improve the system."
],
[
"Closely modelling questions could be of importance for question answering and machine reading. In this paper, we introduce syntactic information to help encode questions in neural networks. We view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline."
]
]
} | {
"question": [
"What MC abbreviate for?",
"how much of improvement the adaptation model can get?",
"what is the architecture of the baseline model?",
"What is the exact performance on SQUAD?"
],
"question_id": [
"a891039441e008f1fd0a227dbed003f76c140737",
"73738e42d488b32c9db89ac8adefc75403fa2653",
"6c8bd7fa1cfb1b2bbeb011cc9c712dceac0c8f06",
"fa218b297d9cdcae238cef71096752ce27ca8f4a"
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"topic_background": [
"research",
"research",
"research",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"Question Answering",
"question",
"question",
"question"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "machine comprehension",
"evidence": [
"Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs."
],
"highlighted_evidence": [
"machine comprehension ",
"Nelufar "
]
}
],
"annotation_id": [
"3723cd0588687070d28ed836a630db0991b52dd6"
],
"worker_id": [
"3f8a5651de2844ab9fc75f8b2d1302e3734fe09e"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" 69.10%/78.38%"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table 2 shows the ablation performances of various Q-code on the development set. Note that since the testset is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved a 68.00% and 77.36% EM and F1 scores, respectively. When we added the explicit question type T-code into the baseline model, the performance was improved slightly to 68.16%(EM) and 77.58%(F1). We then used TreeLSTM introduce syntactic parses for question representation and understanding (replacing simple question type as question understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When letting the number of hidden question types ( $K$ ) to be 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1 . Furthermore, after submitted our result, we have experimented with a large value of $K$ and found that when $K=100$ , we can achieve a better performance of 69.10%/78.38% on the development set."
],
"highlighted_evidence": [
"69.10%/78.38%"
]
}
],
"annotation_id": [
"22e620e1d1e5c7127bb207c662d72eeef7dec0b8"
],
"worker_id": [
"3f8a5651de2844ab9fc75f8b2d1302e3734fe09e"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"word embedding, input encoder, alignment, aggregation, and prediction."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details."
],
"highlighted_evidence": [
"word embedding, input encoder, alignment, aggregation, and prediction"
]
},
{
"unanswerable": false,
"extractive_spans": [
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details.",
"FLOAT SELECTED: Figure 1: A high level view of our basic model."
],
"highlighted_evidence": [
"Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction.",
"FLOAT SELECTED: Figure 1: A high level view of our basic model."
]
}
],
"annotation_id": [
"4c643ea11954f316a7a4a134ac5286bb1052fe50",
"e2961f8a8e69dcf43d6fa98994b67ea4ccda3d7d"
],
"worker_id": [
"3f8a5651de2844ab9fc75f8b2d1302e3734fe09e",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Our model achieves a 68.73% EM score and 77.39% F1 score"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table 1 shows the official leaderboard on SQuAD test set when we submitted our system. Our model achieves a 68.73% EM score and 77.39% F1 score, which is ranked among the state of the art single models (without model ensembling)."
],
"highlighted_evidence": [
"Table 1 shows the official leaderboard on SQuAD test set when we submitted our system. Our model achieves a 68.73% EM score and 77.39% F1 score, which is ranked among the state of the art single models (without model ensembling)."
]
}
],
"annotation_id": [
"41b32d83e097277737f7518cfc7c86a52c7bb2e6"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [
"Figure 1: A high level view of our basic model.",
"Figure 2: The inference layer implemented with a residual network.",
"Figure 3: The discriminative block for question discrimination and adaptation.",
"Table 1: The official leaderboard of single models on SQuAD test set as we submitted our systems (January 20, 2017).",
"Table 2: Performance of various Q-code on the development set.",
"Figure 4: Question Type Analysis",
"Figure 5: F1 Score Analysis."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"6-Figure3-1.png",
"8-Table1-1.png",
"8-Table2-1.png",
"9-Figure4-1.png",
"9-Figure5-1.png"
]
} |
1909.00578 | SUM-QE: a BERT-based Summary Quality Estimation Model | We propose SumQE, a novel Quality Estimation model for summarization based on BERT. The model addresses linguistic quality aspects that are only indirectly captured by content-based approaches to summary evaluation, without involving comparison with human references. SumQE achieves very high correlations with human ratings, outperforming simpler models addressing these linguistic aspects. Predictions of the SumQE model can be used for system development, and to inform users of the quality of automatically produced summaries and other types of generated text. | {
"section_name": [
"Introduction",
"Related Work",
"Datasets",
"Methods ::: The Sum-QE Model",
"Methods ::: The Sum-QE Model ::: Single-task (BERT-FT-S-1):",
"Methods ::: The Sum-QE Model ::: Multi-task with one regressor (BERT-FT-M-1):",
"Methods ::: The Sum-QE Model ::: Multi-task with 5 regressors (BERT-FT-M-5):",
"Methods ::: Baselines ::: BiGRU s with attention:",
"Methods ::: Baselines ::: ROUGE:",
"Methods ::: Baselines ::: Language model (LM):",
"Methods ::: Baselines ::: Next sentence prediction:",
"Experiments",
"Results",
"Conclusion and Future Work",
"Acknowledgments"
],
"paragraphs": [
[
"Quality Estimation (QE) is a term used in machine translation (MT) to refer to methods that measure the quality of automatically translated text without relying on human references BIBREF0, BIBREF1. In this study, we address QE for summarization. Our proposed model, Sum-QE, successfully predicts linguistic qualities of summaries that traditional evaluation metrics fail to capture BIBREF2, BIBREF3, BIBREF4, BIBREF5. Sum-QE predictions can be used for system development, to inform users of the quality of automatically produced summaries and other types of generated text, and to select the best among summaries output by multiple systems.",
"Sum-QE relies on the BERT language representation model BIBREF6. We use a pre-trained BERT model adding just a task-specific layer, and fine-tune the entire model on the task of predicting linguistic quality scores manually assigned to summaries. The five criteria addressed are given in Figure FIGREF2. We provide a thorough evaluation on three publicly available summarization datasets from NIST shared tasks, and compare the performance of our model to a wide variety of baseline methods capturing different aspects of linguistic quality. Sum-QE achieves very high correlations with human ratings, showing the ability of BERT to model linguistic qualities that relate to both text content and form."
],
[
"Summarization evaluation metrics like Pyramid BIBREF5 and ROUGE BIBREF3, BIBREF2 are recall-oriented; they basically measure the content from a model (reference) summary that is preserved in peer (system generated) summaries. Pyramid requires substantial human effort, even in its more recent versions that involve the use of word embeddings BIBREF8 and a lightweight crowdsourcing scheme BIBREF9. ROUGE is the most commonly used evaluation metric BIBREF10, BIBREF11, BIBREF12. Inspired by BLEU BIBREF4, it relies on common $n$-grams or subsequences between peer and model summaries. Many ROUGE versions are available, but it remains hard to decide which one to use BIBREF13. Being recall-based, ROUGE correlates well with Pyramid but poorly with linguistic qualities of summaries. BIBREF14 proposed a regression model for measuring summary quality without references. The scores of their model correlate well with Pyramid and Responsiveness, but text quality is only addressed indirectly.",
"Quality Estimation is well established in MT BIBREF15, BIBREF0, BIBREF1, BIBREF16, BIBREF17. QE methods provide a quality indicator for translation output at run-time without relying on human references, typically needed by MT evaluation metrics BIBREF4, BIBREF18. QE models for MT make use of large post-edited datasets, and apply machine learning methods to predict post-editing effort scores and quality (good/bad) labels.",
"We apply QE to summarization, focusing on linguistic qualities that reflect the readability and fluency of the generated texts. Since no post-edited datasets – like the ones used in MT – are available for summarization, we use instead the ratings assigned by human annotators with respect to a set of linguistic quality criteria. Our proposed models achieve high correlation with human judgments, showing that it is possible to estimate summary quality without human references."
],
[
"We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems).",
"The submitted summaries were manually evaluated in terms of content preservation using the Pyramid score, and according to five linguistic quality criteria ($\\mathcal {Q}1, \\dots , \\mathcal {Q}5$), described in Figure FIGREF2, that do not involve comparison with a model summary. Annotators assigned scores on a five-point scale, with 1 and 5 indicating that the summary is bad or good with respect to a specific $\\mathcal {Q}$. The overall score for a contestant with respect to a specific $\\mathcal {Q}$ is the average of the manual scores assigned to the summaries generated by the contestant. Note that the DUC-04 shared task involved seven $\\mathcal {Q}$s, but some of them were found to be highly overlapping and were grouped into five in subsequent years BIBREF20. We address these five criteria and use DUC data from 2005 onwards in our experiments."
],
[
"In Sum-QE, each peer summary is converted into a sequence of token embeddings, consumed by an encoder $\\mathcal {E}$ to produce a (dense vector) summary representation $h$. Then, a regressor $\\mathcal {R}$ predicts a quality score $S_{\\mathcal {Q}}$ as an affine transformation of $h$:",
"Non-linear regression could also be used, but a linear (affine) $\\mathcal {R}$ already performs well. We use BERT as our main encoder and fine-tune it in three ways, which leads to three versions of Sum-QE."
],
[
"The first version of Sum-QE uses five separate estimators, one per quality score, each having its own encoder $\\mathcal {E}_i$ (a separate BERT instance generating $h_i$) and regressor $\\mathcal {R}_i$ (a separate linear regression layer on top of the corresponding BERT instance):"
],
[
"The second version of Sum-QE uses one estimator to predict all five quality scores at once, from a single encoding $h$ of the summary, produced by a single BERT instance. The intuition is that $\\mathcal {E}$ will learn to create richer representations so that $\\mathcal {R}$ (an affine transformation of $h$ with 5 outputs) will be able to predict all quality scores:",
"where $\\mathcal {R}(h)[i]$ is the $i$-th element of the vector returned by $\\mathcal {R}$."
],
[
"The third version of Sum-QE is similar to BERT-FT-M-1, but we now use five different linear (affine) regressors, one per quality score:",
"Although BERT-FT-M-5 is mathematically equivalent to BERT-FT-M-1, in practice these two versions of Sum-QE produce different results because of implementation details related to how the losses of the regressors (five or one) are combined."
],
[
"This is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5)."
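A rough PyTorch sketch of this baseline encoder is given below; only the attention-weighted sum h = sum_i a_i h_i follows the text, while the layer sizes, the single-layer attention scorer, and the use of one BiGRU layer instead of a stack are illustrative assumptions.

import torch
import torch.nn as nn

class BiGRUAttEncoder(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=300, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bigru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.att_scorer = nn.Linear(2 * hidden, 1)   # unnormalized attention scores

    def forward(self, token_ids):
        states, _ = self.bigru(self.embed(token_ids))      # (B, T, 2*hidden)
        a = torch.softmax(self.att_scorer(states), dim=1)  # self-attention weights a_i
        return (a * states).sum(dim=1)                     # h = sum_i a_i * h_i

h = BiGRUAttEncoder()(torch.randint(0, 30000, (2, 40)))   # two summaries -> (2, 512)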
],
[
"This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences."
],
[
"For a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter."
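A small sketch of this baseline is given below, under the assumption that perplexity over the k worst tokens is the exponentiated mean negative log probability of those tokens; the per-token probabilities would in practice come from GPT-2 or from BERT's masked-token estimates, and the numbers used here are made up.

import math

def k_worst_perplexity(token_probs, k):
    worst = sorted(token_probs)[:k]                       # the k least likely tokens
    avg_neg_log_prob = -sum(math.log(p) for p in worst) / len(worst)
    return math.exp(avg_neg_log_prob)

probs = [0.91, 0.84, 0.02, 0.77, 0.05, 0.66]              # hypothetical LM probabilities
print(k_worst_perplexity(probs, k=2))                     # driven by the two worst tokens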
],
[
"BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:",
"where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
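The displayed equation is not reproduced here; assuming it takes the usual form of an exponentiated mean negative log probability over the n next-sentence probabilities, a minimal sketch looks as follows (the exact aggregation in the paper may differ).

import math

def sentence_level_perplexity(next_sentence_probs):
    n = len(next_sentence_probs)
    return math.exp(-sum(math.log(p) for p in next_sentence_probs) / n)

# Hypothetical BERT next-sentence probabilities p(s_i | s_{i-1}) for a 4-sentence summary.
print(sentence_level_perplexity([0.95, 0.40, 0.88, 0.73]))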
],
[
"To evaluate our methods for a particular $\\mathcal {Q}$, we calculate the average of the predicted scores for the summaries of each particular contestant, and the average of the corresponding manual scores assigned to the contestant's summaries. We measure the correlation between the two (predicted vs. manual) across all contestants using Spearman's $\\rho $, Kendall's $\\tau $ and Pearson's $r$.",
"We train and test the Sum-QE and BiGRU-ATT versions using a 3-fold procedure. In each fold, we train on two datasets (e.g., DUC-05, DUC-06) and test on the third (e.g., DUC-07). We follow the same procedure with the three BiGRU-based models. Hyper-perameters are tuned on a held out subset from the training set of each fold."
],
[
"Table TABREF23 shows Spearman's $\\rho $, Kendall's $\\tau $ and Pearson's $r$ for all datasets and models. The three fine-tuned BERT versions clearly outperform all other methods. Multi-task versions seem to perform better than single-task ones in most cases. Especially for $\\mathcal {Q}4$ and $\\mathcal {Q}5$, which are highly correlated, the multi-task BERT versions achieve the best overall results. BiGRU-ATT also benefits from multi-task learning.",
"The correlation of Sum-QE with human judgments is high or very high BIBREF23 for all $\\mathcal {Q}$s in all datasets, apart from $\\mathcal {Q}2$ in DUC-05 where it is only moderate. Manual scores for $\\mathcal {Q}2$ in DUC-05 are the highest among all $\\mathcal {Q}$s and years (between 4 and 5) and with the smallest standard deviation, as shown in Table TABREF24. Differences among systems are thus small in this respect, and although Sum-QE predicts scores in this range, it struggles to put them in the correct order, as illustrated in Figure FIGREF26.",
"BEST-ROUGE has a negative correlation with the ground-truth scores for $\\mathcal {Q}$2 since it does not account for repetitions. The BiGRU-based models also reach their lowest performance on $\\mathcal {Q}$2 in DUC-05. A possible reason for the higher relative performance of the BERT-based models, which achieve a moderate positive correlation, is that BiGRU captures long-distance relations less effectively than BERT, which utilizes Transformers BIBREF24 and has a larger receptive field. A possible improvement would be a stacked BiGRU, since the states of higher stack layers have a larger receptive field as well.",
"The BERT multi-task versions perform better with highly correlated qualities like $\\mathcal {Q}4$ and $\\mathcal {Q}5$ (as illustrated in Figures 2 to 4 in the supplementary material). However, there is not a clear winner among them. Mathematical equivalence does not lead to deterministic results, especially when random initialization and stochastic learning algorithms are involved. An in-depth exploration of this point would involve further investigation, which will be part of future work."
],
[
"We propose a novel Quality Estimation model for summarization which does not require human references to estimate the quality of automatically produced summaries. Sum-QE successfully predicts qualitative aspects of summaries that recall-oriented evaluation metrics fail to approximate. Leveraging powerful BERT representations, it achieves high correlations with human scores for most linguistic qualities rated, on three different datasets. Future work involves extending the Sum-QE model to capture content-related aspects, either in combination with existing evaluation metrics (like Pyramid and ROUGE) or, preferably, by identifying important information in the original text and modelling its preservation in the proposed summaries. This would preserve Sum-QE's independence from human references, a property of central importance in real-life usage scenarios and system development settings.",
"The datasets used in our experiments come from the NIST DUC shared tasks which comprise newswire articles. We believe that Sum-QE could be easily applied to other domains. A small amount of annotated data would be needed for fine-tuning – especially in domains with specialized vocabulary (e.g., biomedical) – but the model could also be used out of the box. A concrete estimation of performance in this setting will be part of future work. Also, the model could serve to estimate linguistic qualities other than the ones in the DUC dataset with mininum effort.",
"Finally, Sum-QE could serve to assess the quality of other types of texts, not only summaries. It could thus be applied to other text generation tasks, such as natural language generation and sentence compression."
],
[
"We would like to thank the anonymous reviewers for their helpful feedback on this work. The work has been partly supported by the Research Center of the Athens University of Economics and Business, and by the French National Research Agency under project ANR-16-CE33-0013."
]
]
} | {
"question": [
"What are their correlation results?",
"What dataset do they use?",
"What simpler models do they look at?",
"What linguistic quality aspects are addressed?"
],
"question_id": [
"ff28d34d1aaa57e7ad553dba09fc924dc21dd728",
"ae8354e67978b7c333094c36bf9d561ca0c2d286",
"02348ab62957cb82067c589769c14d798b1ceec7",
"3748787379b3a7d222c3a6254def3f5bfb93a60e"
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "High correlation results range from 0.472 to 0.936",
"evidence": [
"FLOAT SELECTED: Table 1: Spearman’s ρ, Kendall’s τ and Pearson’s r correlations on DUC-05, DUC-06 and DUC-07 for Q1–Q5. BEST-ROUGE refers to the version that achieved best correlations and is different across years."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Spearman’s ρ, Kendall’s τ and Pearson’s r correlations on DUC-05, DUC-06 and DUC-07 for Q1–Q5. BEST-ROUGE refers to the version that achieved best correlations and is different across years."
]
}
],
"annotation_id": [
"8498b608303a9387fdac2f1ac707b9a33a37fd3a"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems)."
],
"highlighted_evidence": [
"We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems)."
]
}
],
"annotation_id": [
"2e17f86d69a8f8863a117ba13065509831282ea0"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BiGRU s with attention",
"ROUGE",
"Language model (LM)",
"Next sentence prediction"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Methods ::: Baselines ::: BiGRU s with attention:",
"This is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).",
"Methods ::: Baselines ::: ROUGE:",
"This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.",
"Methods ::: Baselines ::: Language model (LM):",
"For a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.",
"Methods ::: Baselines ::: Next sentence prediction:",
"BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:",
"where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
],
"highlighted_evidence": [
"Methods ::: Baselines ::: BiGRU s with attention:\nThis is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).\n\nMethods ::: Baselines ::: ROUGE:\nThis baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.\n\nMethods ::: Baselines ::: Language model (LM):\nFor a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.\n\nMethods ::: Baselines ::: Next sentence prediction:\nBERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:\n\nwhere $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "BiGRUs with attention, ROUGE, Language model, and next sentence prediction ",
"evidence": [
"Methods ::: Baselines ::: BiGRU s with attention:",
"This is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).",
"Methods ::: Baselines ::: ROUGE:",
"This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.",
"Methods ::: Baselines ::: Language model (LM):",
"For a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.",
"Methods ::: Baselines ::: Next sentence prediction:",
"BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:",
"where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
],
"highlighted_evidence": [
"Methods ::: Baselines ::: BiGRU s with attention:\nThis is very similar to Sum-QE but now $\\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \\sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).\n\nMethods ::: Baselines ::: ROUGE:\nThis baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.\n\nMethods ::: Baselines ::: Language model (LM):\nFor a peer summary, a reasonable estimate of $\\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.\n\nMethods ::: Baselines ::: Next sentence prediction:\nBERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\\mathcal {Q}3$ (Referential Clarity), $\\mathcal {Q}4$ (Focus) and $\\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:\n\nwhere $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\\left< s_{i-1}, s \\right>$, and $n$ is the number of sentences in the peer summary."
]
}
],
"annotation_id": [
"2b5d1eaaa4c82c9192fe7605e823228ecbb0f67b",
"aa8614bc1fc6b2d85516f91f8ae65b4aab7542e1"
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Grammaticality, non-redundancy, referential clarity, focus, structure & coherence",
"evidence": [
"FLOAT SELECTED: Figure 1: SUM-QE rates summaries with respect to five linguistic qualities (Dang, 2006a). The datasets we use for tuning and evaluation contain human assigned scores (from 1 to 5) for each of these categories."
],
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: SUM-QE rates summaries with respect to five linguistic qualities (Dang, 2006a). The datasets we use for tuning and evaluation contain human assigned scores (from 1 to 5) for each of these categories."
]
}
],
"annotation_id": [
"d02df1fd3e9510f8fad08f27dd84562f9eb24662"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [
"Figure 1: SUM-QE rates summaries with respect to five linguistic qualities (Dang, 2006a). The datasets we use for tuning and evaluation contain human assigned scores (from 1 to 5) for each of these categories.",
"Figure 2: Illustration of different flavors of the investigated neural QE methods. An encoder (E) converts the summary to a dense vector representation h. A regressor Ri predicts a quality score SQi using h. E is either a BiGRU with attention (BiGRU-ATT) or BERT (SUM-QE).R has three flavors, one single-task (a) and two multi-task (b, c).",
"Table 1: Spearman’s ρ, Kendall’s τ and Pearson’s r correlations on DUC-05, DUC-06 and DUC-07 for Q1–Q5. BEST-ROUGE refers to the version that achieved best correlations and is different across years.",
"Table 2: Mean manual scores (± standard deviation) for each Q across datasets. Q2 is the hardest to predict because it has the highest scores and the lowest standard deviation.",
"Figure 3: Comparison of the mean gold scores assigned for Q2 and Q3 to each of the 32 systems in the DUC05 dataset, and the corresponding scores predicted by SUM-QE. Scores range from 1 to 5. The systems are sorted in descending order according to the gold scores. SUM-QE makes more accurate predictions forQ2 than for Q3, but struggles to put the systems in the correct order."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Figure3-1.png"
]
} |
1911.09419 | Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction | Knowledge graph embedding, which aims to represent entities and relations as low dimensional vectors (or matrices, tensors, etc.), has been shown to be a powerful technique for predicting missing links in knowledge graphs. Existing knowledge graph embedding models mainly focus on modeling relation patterns such as symmetry/antisymmetry, inversion, and composition. However, many existing approaches fail to model semantic hierarchies, which are common in real-world applications. To address this challenge, we propose a novel knowledge graph embedding model---namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE)---which maps entities into the polar coordinate system. HAKE is inspired by the fact that concentric circles in the polar coordinate system can naturally reflect the hierarchy. Specifically, the radial coordinate aims to model entities at different levels of the hierarchy, and entities with smaller radii are expected to be at higher levels; the angular coordinate aims to distinguish entities at the same level of the hierarchy, and these entities are expected to have roughly the same radii but different angles. Experiments demonstrate that HAKE can effectively model the semantic hierarchies in knowledge graphs, and significantly outperforms existing state-of-the-art methods on benchmark datasets for the link prediction task. | {
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Model Category",
"Related Work ::: The Ways to Model Hierarchy Structures",
"The Proposed HAKE",
"The Proposed HAKE ::: Two Categories of Entities",
"The Proposed HAKE ::: Hierarchy-Aware Knowledge Graph Embedding",
"The Proposed HAKE ::: Loss Function",
"Experiments and Analysis",
"Experiments and Analysis ::: Experimental Settings",
"Experiments and Analysis ::: Main Results",
"Experiments and Analysis ::: Analysis on Relation Embeddings",
"Experiments and Analysis ::: Analysis on Entity Embeddings",
"Experiments and Analysis ::: Ablation Studies",
"Experiments and Analysis ::: Comparison with Other Related Work",
"Conclusion",
"Appendix",
"A. Analysis on Relation Patterns",
"B. Analysis on Negative Entity Embeddings",
"C. Analysis on Moduli of Entity Embeddings",
"D. More Results on Semantic Hierarchies"
],
"paragraphs": [
[
"Knowledge graphs are usually collections of factual triples—(head entity, relation, tail entity), which represent human knowledge in a structured way. In the past few years, we have witnessed the great achievement of knowledge graphs in many areas, such as natural language processing BIBREF0, question answering BIBREF1, and recommendation systems BIBREF2.",
"Although commonly used knowledge graphs contain billions of triples, they still suffer from the incompleteness problem that a lot of valid triples are missing, as it is impractical to find all valid triples manually. Therefore, knowledge graph completion, also known as link prediction in knowledge graphs, has attracted much attention recently. Link prediction aims to automatically predict missing links between entities based on known links. It is a challenging task as we not only need to predict whether there is a relation between two entities, but also need to determine which relation it is.",
"Inspired by word embeddings BIBREF3 that can well capture semantic meaning of words, researchers turn to distributed representations of knowledge graphs (aka, knowledge graph embeddings) to deal with the link prediction problem. Knowledge graph embeddings regard entities and relations as low dimensional vectors (or matrices, tensors), which can be stored and computed efficiently. Moreover, like in the case of word embeddings, knowledge graph embeddings can preserve the semantics and inherent structures of entities and relations. Therefore, other than the link prediction task, knowledge graph embeddings can also be used in various downstream tasks, such as triple classification BIBREF4, relation inference BIBREF5, and search personalization BIBREF6.",
"The success of existing knowledge graph embedding models heavily relies on their ability to model connectivity patterns of the relations, such as symmetry/antisymmetry, inversion, and composition BIBREF7. For example, TransE BIBREF8, which represent relations as translations, can model the inversion and composition patterns. DistMult BIBREF9, which models the three-way interactions between head entities, relations, and tail entities, can model the symmetry pattern. RotatE BIBREF7, which represents entities as points in a complex space and relations as rotations, can model relation patterns including symmetry/antisymmetry, inversion, and composition. However, many existing models fail to model semantic hierarchies in knowledge graphs.",
"Semantic hierarchy is a ubiquitous property in knowledge graphs. For instance, WordNet BIBREF10 contains the triple [arbor/cassia/palm, hypernym, tree], where “tree” is at a higher level than “arbor/cassia/palm” in the hierarchy. Freebase BIBREF11 contains the triple [England, /location/location/contains, Pontefract/Lancaster], where “Pontefract/Lancaster” is at a lower level than “England” in the hierarchy. Although there exists some work that takes the hierarchy structures into account BIBREF12, BIBREF13, they usually require additional data or process to obtain the hierarchy information. Therefore, it is still challenging to find an approach that is capable of modeling the semantic hierarchy automatically and effectively.",
"In this paper, we propose a novel knowledge graph embedding model—namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE). To model the semantic hierarchies, HAKE is expected to distinguish entities in two categories: (a) at different levels of the hierarchy; (b) at the same level of the hierarchy. Inspired by the fact that entities that have the hierarchical properties can be viewed as a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Thus, we use modulus information to model entities in the category (a), as the size of moduli can reflect the depth. Under the above settings, entities in the category (b) will have roughly the same modulus, which is hard to distinguish. Inspired by the fact that the points on the same circle can have different phases, we use phase information to model entities in the category (b). Combining the modulus and phase information, HAKE maps entities into the polar coordinate system, where the radial coordinate corresponds to the modulus information and the angular coordinate corresponds to the phase information. Experiments show that our proposed HAKE model can not only clearly distinguish the semantic hierarchies of entities, but also significantly and consistently outperform several state-of-the-art methods on the benchmark datasets.",
"",
"Notations Throughout this paper, we use lower-case letters $h$, $r$, and $t$ to represent head entities, relations, and tail entities, respectively. The triplet $(h,r,t)$ denotes a fact in knowledge graphs. The corresponding boldface lower-case letters $\\textbf {h}$, $\\textbf {r}$ and $\\textbf {t}$ denote the embeddings (vectors) of head entities, relations, and tail entities. The $i$-th entry of a vector $\\textbf {h}$ is denoted as $[\\textbf {h}]_i$. Let $k$ denote the embedding dimension.",
"Let $\\circ :\\mathbb {R}^n\\times \\mathbb {R}^n\\rightarrow \\mathbb {R}^n$ denote the Hadamard product between two vectors, that is,",
"and $\\Vert \\cdot \\Vert _1$, $\\Vert \\cdot \\Vert _2$ denote the $\\ell _1$ and $\\ell _2$ norm, respectively."
],
[
"In this section, we will describe the related work and the key differences between them and our work in two aspects—the model category and the way to model hierarchy structures in knowledge graphs."
],
[
"Roughly speaking, we can divide knowledge graph embedding models into three categories—translational distance models, bilinear models, and neural network based models. Table TABREF2 exhibits several popular models.",
"Translational distance models describe relations as translations from source entities to target entities. TransE BIBREF8 supposes that entities and relations satisfy $\\textbf {h}+\\textbf {r}\\approx \\textbf {t}$, where $\\textbf {h}, \\textbf {r}, \\textbf {t} \\in \\mathbb {R}^n$, and defines the corresponding score function as $f_r(\\textbf {h},\\textbf {t})=-\\Vert \\textbf {h}+\\textbf {r}-\\textbf {t}\\Vert _{1/2}$. However, TransE does not perform well on 1-N, N-1 and N-N relations BIBREF14. TransH BIBREF14 overcomes the many-to-many relation problem by allowing entities to have distinct representations given different relations. The score function is defined as $f_r(\\textbf {h},\\textbf {t})=-\\Vert \\textbf {h}_{\\perp }+\\textbf {r}-\\textbf {t}_{\\perp }\\Vert _2$, where $\\textbf {h}_{\\perp }$ and $\\textbf {t}_{\\perp }$ are the projections of entities onto relation-specific hyperplanes. ManifoldE BIBREF15 deals with many-to-many problems by relaxing the hypothesis $\\textbf {h}+\\textbf {r}\\approx \\textbf {t}$ to $\\Vert \\textbf {h}+\\textbf {r}-\\textbf {t}\\Vert _2^2\\approx \\theta _r^2$ for each valid triple. In this way, the candidate entities can lie on a manifold instead of exact point. The corresponding score function is defined as $f_r(\\textbf {h},\\textbf {t})=-(\\Vert \\textbf {h}+\\textbf {r}-\\textbf {t}\\Vert _2^2-\\theta _r^2)^2$. More recently, to better model symmetric and antisymmetric relations, RotatE BIBREF7 defines each relation as a rotation from source entities to target entities in a complex vector space. The score function is defined as $f_r(\\textbf {h},\\textbf {t})=-\\Vert \\textbf {h}\\circ \\textbf {r}-\\textbf {t}\\Vert _1$, where $\\textbf {h},\\textbf {r},\\textbf {t}\\in \\mathbb {C}^k$ and $|[\\textbf {r}]_i|=1$.",
"Bilinear models product-based score functions to match latent semantics of entities and relations embodied in their vector space representations. RESCAL BIBREF16 represents each relation as a full rank matrix, and defines the score function as $f_r(\\textbf {h},\\textbf {t})=\\textbf {h}^\\top \\textbf {M}_r \\textbf {t}$, which can also be seen as a bilinear function. As full rank matrices are prone to overfitting, recent works turn to make additional assumptions on $\\textbf {M}_r$. For example, DistMult BIBREF9 assumes $\\textbf {M}_r$ to be a diagonal matrix, and ANALOGY BIBREF19 supposes that $\\textbf {M}_r$ is normal. However, these simplified models are usually less expressive and not powerful enough for general knowledge graphs. Differently, ComplEx BIBREF17 extends DistMult by introducing complex-valued embeddings to better model asymmetric and inverse relations. HolE BIBREF20 combines the expressive power of RESCAL with the efficiency and simplicity of DistMult by using the circular correlation operation.",
"Neural network based models have received greater attention in recent years. For example, MLP BIBREF21 and NTN BIBREF22 use a fully connected neural network to determine the scores of given triples. ConvE BIBREF18 and ConvKB BIBREF23 employ convolutional neural networks to define score functions. Recently, graph convolutional networks are also introduced, as knowledge graphs obviously have graph structures BIBREF24.",
"Our proposed model HAKE belongs to the translational distance models. More specifically, HAKE shares similarities with RotatE BIBREF7, in which the authors claim that they use both modulus and phase information. However, there exist two major differences between RotatE and HAKE. Detailed differences are as follows.",
"The aims are different. RotatE aims to model the relation patterns including symmetry/antisymmetry, inversion, and composition. HAKE aims to model the semantic hierarchy, while it can also model all the relation patterns mentioned above.",
"The ways to use modulus information are different. RotatE models relations as rotations in the complex space, which encourages two linked entities to have the same modulus, no matter what the relation is. The different moduli in RotatE come from the inaccuracy in training. Instead, HAKE explicitly models the modulus information, which significantly outperforms RotatE in distinguishing entities at different levels of the hierarchy."
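For concreteness, toy NumPy versions of a few of the score functions discussed above are sketched below; they follow the formulas quoted in the text (with the L1 norm chosen for TransE for simplicity) and are not any library's reference implementation.

import numpy as np

def transe_score(h, r, t):             # f_r(h,t) = -||h + r - t||_1
    return -np.abs(h + r - t).sum()

def distmult_score(h, r, t):           # bilinear form with diagonal M_r
    return float((h * r * t).sum())

def rotate_score(h, r, t):             # h, r, t in C^k with |r_i| = 1
    return -np.abs(h * r - t).sum()    # f_r(h,t) = -||h o r - t||_1

k = 4
h, r, t = np.random.randn(k), np.random.randn(k), np.random.randn(k)
print(transe_score(h, r, t), distmult_score(h, r, t))
hc = np.random.randn(k) + 1j * np.random.randn(k)
tc = np.random.randn(k) + 1j * np.random.randn(k)
rc = np.exp(1j * np.random.uniform(0, 2 * np.pi, k))   # unit-modulus relation
print(rotate_score(hc, rc, tc))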
],
[
"Another related problem is how to model hierarchy structures in knowledge graphs. Some recent work considers the problem in different ways. BIBREF25 embed entities and categories jointly into a semantic space and designs models for the concept categorization and dataless hierarchical classification tasks. BIBREF13 use clustering algorithms to model the hierarchical relation structures. BIBREF12 proposed TKRL, which embeds the type information into knowledge graph embeddings. That is, TKRL requires additional hierarchical type information for entities.",
"Different from the previous work, our work",
"considers the link prediction task, which is a more common task for knowledge graph embeddings;",
"can automatically learn the semantic hierarchy in knowledge graphs without using clustering algorithms;",
"does not require any additional information other than the triples in knowledge graphs."
],
[
"In this section, we introduce our proposed model HAKE. We first introduce two categories of entities that reflect the semantic hierarchies in knowledge graphs. Afterwards, we introduce our proposed HAKE that can model entities in both of the categories."
],
[
"To model the semantic hierarchies of knowledge graphs, a knowledge graph embedding model must be capable of distinguishing entities in the following two categories.",
"Entities at different levels of the hierarchy. For example, “mammal” and “dog”, “run” and ”move”.",
"Entities at the same level of the hierarchy. For example, “rose” and “peony”, “truck” and ”lorry”."
],
[
"To model both of the above categories, we propose a hierarchy-aware knowledge graph embedding model—HAKE. HAKE consists of two parts—the modulus part and the phase part—which aim to model entities in the two different categories, respectively. Figure FIGREF13 gives an illustration of the proposed model.",
"To distinguish embeddings in the different parts, we use $\\textbf {e}_m$ ($\\textbf {e}$ can be $\\textbf {h}$ or $\\textbf {t}$) and $\\textbf {r}_m$ to denote the entity embedding and relation embedding in the modulus part, and use $\\textbf {e}_p$ ($\\textbf {e}$ can be $\\textbf {h}$ or $\\textbf {t}$) and $\\textbf {r}_p$ to denote the entity embedding and relation embedding in the phase part.",
"The modulus part aims to model the entities at different levels of the hierarchy. Inspired by the fact that entities that have hierarchical property can be viewed as a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Therefore, we use modulus information to model entities in the category (a), as moduli can reflect the depth in a tree. Specifically, we regard each entry of $\\textbf {h}_m$ and $\\textbf {t}_m$, that is, $[\\textbf {h}_m]_i$ and $[\\textbf {t}_m]_i$, as a modulus, and regard each entry of $\\textbf {r}_m$, that is, $[\\textbf {r}]_i$, as a scaling transformation between two moduli. We can formulate the modulus part as follows:",
"The corresponding distance function is:",
"Note that we allow the entries of entity embeddings to be negative but restrict the entries of relation embeddings to be positive. This is because that the signs of entity embeddings can help us to predict whether there exists a relation between two entities. For example, if there exists a relation $r$ between $h$ and $t_1$, and no relation between $h$ and $t_2$, then $(h, r, t_1)$ is a positive sample and $(h, r, t_2)$ is a negative sample. Our goal is to minimize $d_r(\\textbf {h}_m, \\textbf {t}_{1,m})$ and maximize $d_r(\\textbf {h}_m, \\textbf {t}_{2,m})$, so as to make a clear distinction between positive and negative samples. For the positive sample, $[\\textbf {h}]_i$ and $[\\textbf {t}_1]_i$ tend to share the same sign, as $[\\textbf {r}_m]_i>0$. For the negative sample, the signs of $[\\textbf {h}_m]_i$ and $[\\textbf {t}_{2,m}]_i$ can be different if we initialize their signs randomly. In this way, $d_r(\\textbf {h}_m, \\textbf {t}_{2,m})$ is more likely to be larger than $d_r(\\textbf {h}_m, \\textbf {t}_{1,m})$, which is exactly what we desire. We will validate this argument by experiments in Section 4 of the supplementary material.",
"Further, we can expect the entities at higher levels of the hierarchy to have smaller modulus, as these entities are more close to the root of the tree.",
"If we use only the modulus part to embed knowledge graphs, then the entities in the category (b) will have the same modulus. Moreover, suppose that $r$ is a relation that reflects the same semantic hierarchy, then $[\\textbf {r}]_i$ will tend to be one, as $h\\circ r\\circ r=h$ holds for all $h$. Hence, embeddings of the entities in the category (b) tend to be the same, which makes it hard to distinguish these entities. Therefore, a new module is required to model the entities in the category (b).",
"The phase part aims to model the entities at the same level of the semantic hierarchy. Inspired by the fact that points on the same circle (that is, have the same modulus) can have different phases, we use phase information to distinguish entities in the category (b). Specifically, we regard each entry of $\\textbf {h}_p$ and $\\textbf {t}_p$, that is, $[\\textbf {h}_p]_i$ and $[\\textbf {t}_p]_i$ as a phase, and regard each entry of $\\textbf {r}_p$, that is, $[\\textbf {r}_p]_i$, as a phase transformation. We can formulate the phase part as follows:",
"The corresponding distance function is:",
"where $\\sin (\\cdot )$ is an operation that applies the sine function to each element of the input. Note that we use a sine function to measure the distance between phases instead of using $\\Vert \\textbf {h}_p+\\textbf {r}_p-\\textbf {t}_p\\Vert _1$, as phases have a periodic characteristic. This distance function shares the same formulation as that of pRotatE BIBREF7.",
"Combining the modulus part and the phase part, HAKE maps entities into the polar coordinate system, where the radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively. That is, HAKE maps an entity $h$ to $[\\textbf {h}_m;\\textbf {h}_p]$, where $\\textbf {h}_m$ and $\\textbf {h}_p$ are generated by the modulus part and the phase part, respectively, and $[\\,\\cdot \\,; \\,\\cdot \\,]$ denotes the concatenation of two vectors. Obviously, $([\\textbf {h}_m]_i,[\\textbf {h}_p]_i)$ is a 2D point in the polar coordinate system. Specifically, we formulate HAKE as follows:",
"The distance function of HAKE is:",
"where $\\lambda \\in \\mathbb {R}$ is a parameter that learned by the model. The corresponding score function is",
"When two entities have the same moduli, then the modulus part $d_{r,m}(\\textbf {h}_m,\\textbf {t}_m)=0$. However, the phase part $d_{r,p}(\\textbf {h}_p,\\textbf {t}_p)$ can be very different. By combining the modulus part and the phase part, HAKE can model the entities in both the category (a) and the category (b). Therefore, HAKE can model semantic hierarchies of knowledge graphs.",
"When evaluating the models, we find that adding a mixture bias to $d_{r,m}(\\textbf {h},\\textbf {t})$ can help to improve the performance of HAKE. The modified $d_{r,m}(\\textbf {h},\\textbf {t})$ is given by:",
"where $0<\\textbf {r}^{\\prime }_m<1$ is a vector that have the same dimension with $\\textbf {r}_m$. Indeed, the above distance function is equivalent to",
"where $/$ denotes the element-wise division operation. If we let $\\textbf {r}_m\\leftarrow (1-\\textbf {r}_m^{\\prime })/(\\textbf {r}_m+\\textbf {r}_m^{\\prime })$, then the modified distance function is exactly the same as the original one when comparing the distances of different entity pairs. For notational convenience, we still use $d_{r,m}(\\textbf {h},\\textbf {t})=\\Vert \\textbf {h}_m\\circ \\textbf {r}_m-\\textbf {t}_m\\Vert _2$ to represent the modulus part. We will conduct ablation studies on the bias in the experiment section.",
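A hedged NumPy sketch of the resulting distance and score functions is given below. The modulus term follows the formula quoted above; the halved argument inside the sine follows the pRotatE-style phase distance the text refers to, and taking the score as the negative distance follows the usual convention for translational distance models, so both should be read as assumptions rather than the authors' exact code.

import numpy as np

def hake_distance(h_m, r_m, t_m, h_p, r_p, t_p, lam=1.0):
    d_mod = np.linalg.norm(h_m * r_m - t_m, ord=2)             # modulus part (L2)
    d_phase = np.abs(np.sin((h_p + r_p - t_p) / 2.0)).sum()    # phase part (L1 of sines)
    return d_mod + lam * d_phase

def hake_score(*args, **kwargs):
    return -hake_distance(*args, **kwargs)                     # f_r(h,t) = -d_r(h,t)

k = 8
h_m, t_m = np.random.randn(k), np.random.randn(k)
r_m = np.abs(np.random.randn(k))                               # relation moduli kept positive
h_p, r_p, t_p = [np.random.uniform(0, 2 * np.pi, k) for _ in range(3)]
print(hake_score(h_m, r_m, t_m, h_p, r_p, t_p, lam=0.5))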
],
[
"To train the model, we use the negative sampling loss functions with self-adversarial training BIBREF7:",
"where $\\gamma $ is a fixed margin, $\\sigma $ is the sigmoid function, and $(h^{\\prime }_i,r,t^{\\prime }_i)$ is the $i$th negative triple. Moreover,",
"is the probability distribution of sampling negative triples, where $\\alpha $ is the temperature of sampling."
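A sketch of this loss under the RotatE-style formulation cited above is shown below: the positive triple is pushed inside the margin gamma, and each negative triple is weighted by a temperature-alpha softmax over the negatives' scores so that harder negatives contribute more. Treating the weights as constants (detaching them) is an implementation assumption.

import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_dist, neg_dists, gamma=12.0, alpha=1.0):
    # pos_dist: d_r(h, t) of the positive triple; neg_dists: d_r(h'_i, r, t'_i)
    pos_term = -F.logsigmoid(gamma - pos_dist)
    neg_weights = torch.softmax(alpha * (-neg_dists), dim=0).detach()  # p(h'_i, r, t'_i)
    neg_term = -(neg_weights * F.logsigmoid(neg_dists - gamma)).sum()
    return pos_term + neg_term

print(self_adversarial_loss(torch.tensor(2.3), torch.tensor([9.1, 4.0, 6.5])))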
],
[
"This section is organized as follows. First, we introduce the experimental settings in detail. Then, we show the effectiveness of our proposed model on three benchmark datasets. Finally, we analyze the embeddings generated by HAKE, and show the results of ablation studies. The code of HAKE is available on GitHub at https://github.com/MIRALab-USTC/KGE-HAKE."
],
[
"We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18.",
"WN18RR, FB15k-237, and YAGO3-10 are subsets of WN18 BIBREF8, FB15k BIBREF8, and YAGO3 BIBREF27, respectively. As pointed out by BIBREF26 and BIBREF18, WN18 and FB15k suffer from the test set leakage problem. One can attain the state-of-the-art results even using a simple rule based model. Therefore, we use WN18RR and FB15k-237 as the benchmark datasets.",
"Evaluation Protocol Following BIBREF8, for each triple $(h,r,t)$ in the test dataset, we replace either the head entity $h$ or the tail entity $t$ with each candidate entity to create a set of candidate triples. We then rank the candidate triples in descending order by their scores. It is worth noting that we use the “Filtered” setting as in BIBREF8, which does not take any existing valid triples into accounts at ranking. We choose Mean Reciprocal Rank (MRR) and Hits at N (H@N) as the evaluation metrics. Higher MRR or H@N indicate better performance.",
"Training Protocol We use Adam BIBREF28 as the optimizer, and use grid search to find the best hyperparameters based on the performance on the validation datasets. To make the model easier to train, we add an additional coefficient to the distance function, i.e., $d_{r}(\\textbf {h},\\textbf {t})=\\lambda _1d_{r,m}(\\textbf {h}_m,\\textbf {t}_m)+\\lambda _2 d_{r,p}(\\textbf {h}_p,\\textbf {t}_p)$, where $\\lambda _1,\\lambda _2\\in \\mathbb {R}$.",
"Baseline Model One may argue that the phase part is unnecessary, as we can distinguish entities in the category (b) by allowing $[\\textbf {r}]_i$ to be negative. We propose a model—ModE—that uses only the modulus part but allow $[\\textbf {r}]_i<0$. Specifically, the distance function of ModE is"
],
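A small sketch of the “Filtered” ranking evaluation described in the evaluation protocol above. The score convention (higher is better), entity indexing, and variable names are assumptions for illustration.

```python
import numpy as np

def filtered_rank(scores, gold, other_valid):
    """scores: (n_entities,) candidate scores, higher is better.
    gold: index of the gold entity for this test triple.
    other_valid: indices of other entities that also form valid triples."""
    s = scores.astype(float).copy()
    mask = [i for i in other_valid if i != gold]
    s[mask] = -np.inf                      # filtered setting: ignore other valid triples
    return int((s > s[gold]).sum()) + 1    # rank in descending order of score

def mrr_and_hits(ranks, ns=(1, 3, 10)):
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MRR": float((1.0 / ranks).mean())}
    for n in ns:
        metrics[f"H@{n}"] = float((ranks <= n).mean())
    return metrics
```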
[
"In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7.",
"Table TABREF19 shows the performance of HAKE, ModE, and several previous models. Our baseline model ModE shares similar simplicity with TransE, but significantly outperforms it on all datasets. Surprisingly, ModE even outperforms more complex models such as DistMult, ConvE and Complex on all datasets, and beats the state-of-the-art model—RotatE—on FB15k-237 and YAGO3-10 datasets, which demonstrates the great power of modulus information. Table TABREF19 also shows that our HAKE significantly outperforms existing state-of-the-art methods on all datasets.",
"WN18RR dataset consists of two kinds of relations: the symmetric relations such as $\\_similar\\_to$, which link entities in the category (b); other relations such as $\\_hypernym$ and $\\_member\\_meronym$, which link entities in the category (a). Actually, RotatE can model entities in the category (b) very well BIBREF7. However, HAKE gains a 0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively. The superior performance of HAKE compared with RotatE implies that our proposed model can better model different levels in the hierarchy.",
"FB15k-237 dataset has more complex relation types and fewer entities, compared with WN18RR and YAGO3-10. Although there are relations that reflect hierarchy in FB15k-237, there are also lots of relations, such as “/location/location/time_zones” and “/film/film/prequel”, that do not lead to hierarchy. The characteristic of this dataset accounts for why our proposed models doesn't outperform the previous state-of-the-art as much as that of WN18RR and YAGO3-10 datasets. However, the results also show that our models can gain better performance so long as there exists semantic hierarchies in knowledge graphs. As almost all knowledge graphs have such hierarchy structures, our model is widely applicable.",
"YAGO3-10 datasets contains entities with high relation-specific indegree BIBREF18. For example, the link prediction task $(?, hasGender, male)$ has over 1000 true answers, which makes the task challenging. Fortunately, we can regard “male” as an entity at higher level of the hierarchy and the predicted head entities as entities at lower level. In this way, YAGO3-10 is a dataset that clearly has semantic hierarchy property, and we can expect that our proposed models is capable of working well on this dataset. Table TABREF19 validates our expectation. Both ModE and HAKE significantly outperform the previous state-of-the-art. Notably, HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively."
],
[
"In this part, we first show that HAKE can effectively model the hierarchy structures by analyzing the moduli of relation embeddings. Then, we show that the phase part of HAKE can help us to distinguish entities at the same level of the hierarchy by analyzing the phases of relation embeddings.",
"In Figure FIGREF20, we plot the distribution histograms of moduli of six relations. These relations are drawn from WN18RR, FB15k-237, and YAGO3-10. Specifically, the relations in Figures FIGREF20a, FIGREF20c, FIGREF20e and FIGREF20f are drawn from WN18RR. The relation in Figure FIGREF20d is drawn from FB15k-237. The relation in Figure FIGREF20b is drawn from YAGO3-10. We divide the relations in Figure FIGREF20 into three groups.",
"Relations in Figures FIGREF20c and FIGREF20d connect the entities at the same level of the semantic hierarchy;",
"Relations in Figures FIGREF20a and FIGREF20b represent that tail entities are at higher levels than head entities of the hierarchy;",
"Relations in Figures FIGREF20e and FIGREF20f represent that tail entities are at lower levels than head entities of the hierarchy.",
"As described in the model description section, we expect entities at higher levels of the hierarchy to have small moduli. The experiments validate our expectation. For both ModE and HAKE, most entries of the relations in the group (A) take values around one, which leads to that the head entities and tail entities have approximately the same moduli. In the group (B), most entries of the relations take values less than one, which results in that the head entities have smaller moduli than the tail entities. The cases in the group (C) are contrary to that in the group (B). These results show that our model can capture the semantic hierarchies in knowledge graphs. Moreover, compared with ModE, the relation embeddings' moduli of HAKE have lower variances, which shows that HAKE can model hierarchies more clearly.",
"As mentioned above, relations in the group (A) reflect the same semantic hierarchy, and are expected to have the moduli of about one. Obviously, it is hard to distinguish entities linked by these relations only using the modulus part. In Figure FIGREF22, we plot the phases of the relations in the group (A). The results show that the entities at the same level of the hierarchy can be distinguished by their phases, as many phases have the values of $\\pi $."
],
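A sketch of how the distribution histograms described above could be produced from trained relation embeddings. The variable names, bin count, and plotting details are illustrative assumptions, not the authors' plotting code.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_relation_modulus_histogram(r_m, relation_name, bins=50):
    """r_m: 1-D array with the modulus entries [r_m]_i of one relation."""
    plt.hist(np.abs(r_m), bins=bins)
    plt.axvline(1.0, linestyle="--")  # entries near 1 suggest a same-level relation
    plt.xlabel("modulus entry value")
    plt.ylabel("frequency")
    plt.title(relation_name)
    plt.show()
```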
[
"In this part, to further show that HAKE can capture the semantic hierarchies between entities, we visualize the embeddings of several entity pairs.",
"We plot the entity embeddings of two models: the previous state-of-the-art RotatE and our proposed HAKE. RotatE regards each entity as a group of complex numbers. As a complex number can be seen as a point on a 2D plane, we can plot the entity embeddings on a 2D plane. As for HAKE, we have mentioned that it maps entities into the polar coordinate system. Therefore, we can also plot the entity embeddings generated by HAKE on a 2D plane based on their polar coordinates. For a fair comparison, we set $k=500$. That is, each plot contains 500 points, and the actual dimension of entity embeddings is 1000. Note that we use the logarithmic scale to better display the differences between entity embeddings. As all the moduli have values less than one, after applying the logarithm operation, the larger radii in the figures will actually represent smaller modulus.",
"Figure FIGREF29 shows the visualization results of three triples from the WN18RR dataset. Compared with the tail entities, the head entities in Figures FIGREF29a, FIGREF29b, and FIGREF29c are at lower levels, similar levels, higher levels in the semantic hierarchy, respectively. We can see that there exist clear concentric circles in the visualization results of HAKE, which demonstrates that HAKE can effectively model the semantic hierarchies. However, in RotatE, the entity embeddings in all three subfigures are mixed, making it hard to distinguish entities at different levels in the hierarchy."
],
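A sketch of the polar-coordinate visualization described above: each embedding dimension of an entity becomes one 2D point, with the radius derived from the modulus (log-scaled, so that smaller moduli map to larger radii, as stated in the text) and the angle taken from the phase. The specific log transform and plotting details are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_entity_polar(moduli, phases, label):
    """moduli, phases: 1-D arrays of length k for one entity (k points per entity)."""
    # -log maps moduli in (0, 1) to positive radii; smaller modulus -> larger radius.
    radius = -np.log(np.clip(np.abs(moduli), 1e-12, None))
    x, y = radius * np.cos(phases), radius * np.sin(phases)
    plt.scatter(x, y, s=4, label=label)

# Usage: call once per entity of a triple, then plt.legend(); plt.show()
```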
[
"In this part, we conduct ablation studies on the modulus part and the phase part of HAKE, as well as the mixture bias item. Table TABREF26 shows the results on three benchmark datasets.",
"We can see that the bias can improve the performance of HAKE on nearly all metrics. Specifically, the bias improves the H@1 score of $4.7\\%$ on YAGO3-10 dataset, which illustrates the effectiveness of the bias.",
"We also observe that the modulus part of HAKE does not perform well on all datasets, due to its inability to distinguish the entities at the same level of the hierarchy. When only using the phase part, HAKE degenerates to the pRotatE model BIBREF7. It performs better than the modulus part, because it can well model entities at the same level of the hierarchy. However, our HAKE model significantly outperforms the modulus part and the phase part on all datasets, which demonstrates the importance to combine the two parts for modeling semantic hierarchies in knowledge graphs."
],
[
"We compare our models with TKRL models BIBREF12, which also aim to model the hierarchy structures. For the difference between HAKE and TKRL, please refer to the Related Work section. Table TABREF27 shows the H@10 scores of HAKE and TKRLs on FB15k dataset. The best performance of TKRL is .734 obtained by the WHE+STC version, while the H@10 score of our HAKE model is .884. The results show that HAKE significantly outperforms TKRL, though it does not require additional information."
],
[
"To model the semantic hierarchies in knowledge graphs, we propose a novel hierarchy-aware knowledge graph embedding model—HAKE—which maps entities into the polar coordinate system. Experiments show that our proposed HAKE significantly outperforms several existing state-of-the-art methods on benchmark datasets for the link prediction task. A further investigation shows that HAKE is capable of modeling entities at both different levels and the same levels in the semantic hierarchies."
],
[
"In this appendix, we will provide analysis on relation patterns, negative entity embeddings, and moduli of entity embeddings. Then, we will give more visualization results on semantic hierarchies."
],
[
"In this section, we prove that our HAKE model can infer the (anti)symmetry, inversion and composition relation patterns. Detailed propositions and their proofs are as follows.",
"Proposition 1 HAKE can infer the (anti)symmetry pattern.",
"If $r(x, y)$ and $r(y, x)$ hold, we have",
"Then we have",
"Otherwise, if $r(x, y)$ and $\\lnot r(y, x)$ hold, we have",
"Proposition 2 HAKE can infer the inversion pattern.",
"If $r_1(x, y)$ and $r_2(y, x)$ hold, we have",
"Then, we have",
"",
"Proposition 3 HAKE can infer the composition pattern.",
"If $r_1(x, z)$, $r_2(x, y)$ and $r_3(y, z)$ hold, we have",
"Then we have"
],
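The display equations referenced by Propositions 1-3 are not reproduced in this dump. As reference material only (these are the standard relation-pattern statements commonly used in the knowledge graph embedding literature, not the authors' omitted derivations), the three patterns can be written as:

```latex
% Standard relation-pattern definitions (reference only, not the omitted proofs).
\begin{align*}
\text{symmetry / antisymmetry:} \quad & r(x,y) \Rightarrow r(y,x)
  \quad\text{resp.}\quad r(x,y) \Rightarrow \lnot r(y,x),\\
\text{inversion:} \quad & r_1(x,y) \Rightarrow r_2(y,x),\\
\text{composition:} \quad & r_2(x,y) \wedge r_3(y,z) \Rightarrow r_1(x,z).
\end{align*}
```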
[
"We denote the linked entity pairs as the set of entity pairs linked by some relation, and denote the unlinked entity pairs as the set of entity pairs that no triple contains in the train/valid/test dataset. It is worth noting that the unlinked paris may contain valid triples, as the knowledge graph is incomplete. For both the linked and the unlinked entity pairs, we count the embedding entries of two entities that have different signs. Figure FIGREF34 shows the result.",
"For the linked entity pairs, as we expected, most of the entries have the same sign. Due to the large amount of unlinked entity pairs, we randomly sample a part of them for plotting. For the unlinked entity pairs, around half of the entries have different signs, which is consistent with the random initialization. The results support our hypothesis that the negative signs of entity embeddings can help our model to distinguish positive and negative triples."
],
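A sketch of the sign-difference count underlying this analysis. How the linked and unlinked pairs are sampled and plotted is omitted; the variable names are illustrative assumptions.

```python
import numpy as np

def sign_difference_count(e1, e2):
    """Number of embedding entries where two entities' values have different signs."""
    return int((np.sign(e1) != np.sign(e2)).sum())

def sign_difference_counts(pairs, embeddings):
    """pairs: iterable of (head_id, tail_id); embeddings: (n_entities, k) array."""
    return [sign_difference_count(embeddings[h], embeddings[t]) for h, t in pairs]
```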
[
"Figure FIGREF37 shows the modulus of entity embeddings. We can observe that RotatE encourages the modulus of embeddings to be the same, as the relations are modeled as rotations in a complex space. Compared with RotatE, the modulus of entity embeddings in HAKE are more dispersed, making it to have more potential to model the semantic hierarchies."
],
[
"In this part, we visualize more triples from WN18RR. We plot the head and tail entities on 2D planes using the same method as that in the main text. The visualization results are in Figure FIGREF41, where the subcaptions demonstrate the corresponding triples. The figures show that, compared with RotatE, our HAKE model can better model the entities both in different hierarchies and in the same hierarchy.",
""
]
]
} | {
"question": [
"What benchmark datasets are used for the link prediction task?",
"What are state-of-the art models for this task?",
"How better does HAKE model peform than state-of-the-art methods?",
"How are entities mapped onto polar coordinate system?"
],
"question_id": [
"6852217163ea678f2009d4726cb6bd03cf6a8f78",
"cd1ad7e18d8eef8f67224ce47f3feec02718ea1a",
"9c9e90ceaba33242342a5ae7568e89fe660270d5",
"2a058f8f6bd6f8e80e8452e1dba9f8db5e3c7de8"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"WN18RR",
"FB15k-237",
"YAGO3-10"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18."
],
"highlighted_evidence": [
"We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27."
]
},
{
"unanswerable": false,
"extractive_spans": [
"WN18RR BIBREF26",
"FB15k-237 BIBREF18",
"YAGO3-10 BIBREF27"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18.",
"WN18RR, FB15k-237, and YAGO3-10 are subsets of WN18 BIBREF8, FB15k BIBREF8, and YAGO3 BIBREF27, respectively. As pointed out by BIBREF26 and BIBREF18, WN18 and FB15k suffer from the test set leakage problem. One can attain the state-of-the-art results even using a simple rule based model. Therefore, we use WN18RR and FB15k-237 as the benchmark datasets."
],
"highlighted_evidence": [
"We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18.\n\nWN18RR, FB15k-237, and YAGO3-10 are subsets of WN18 BIBREF8, FB15k BIBREF8, and YAGO3 BIBREF27, respectively."
]
}
],
"annotation_id": [
"2d676a618b987bdcb3bb45a1c64b5d046d90bc97",
"430b7ed2af25af1665612a255640abd75caa9a31"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"TransE",
"DistMult",
"ComplEx",
"ConvE",
"RotatE"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7."
],
"highlighted_evidence": [
"In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7."
]
}
],
"annotation_id": [
"3dc899368cac5c9c6641bf3e3c9c29c92f9429d5"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively",
"doesn't outperform the previous state-of-the-art as much as that of WN18RR and YAGO3-10",
"HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"WN18RR dataset consists of two kinds of relations: the symmetric relations such as $\\_similar\\_to$, which link entities in the category (b); other relations such as $\\_hypernym$ and $\\_member\\_meronym$, which link entities in the category (a). Actually, RotatE can model entities in the category (b) very well BIBREF7. However, HAKE gains a 0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively. The superior performance of HAKE compared with RotatE implies that our proposed model can better model different levels in the hierarchy.",
"FB15k-237 dataset has more complex relation types and fewer entities, compared with WN18RR and YAGO3-10. Although there are relations that reflect hierarchy in FB15k-237, there are also lots of relations, such as “/location/location/time_zones” and “/film/film/prequel”, that do not lead to hierarchy. The characteristic of this dataset accounts for why our proposed models doesn't outperform the previous state-of-the-art as much as that of WN18RR and YAGO3-10 datasets. However, the results also show that our models can gain better performance so long as there exists semantic hierarchies in knowledge graphs. As almost all knowledge graphs have such hierarchy structures, our model is widely applicable.",
"YAGO3-10 datasets contains entities with high relation-specific indegree BIBREF18. For example, the link prediction task $(?, hasGender, male)$ has over 1000 true answers, which makes the task challenging. Fortunately, we can regard “male” as an entity at higher level of the hierarchy and the predicted head entities as entities at lower level. In this way, YAGO3-10 is a dataset that clearly has semantic hierarchy property, and we can expect that our proposed models is capable of working well on this dataset. Table TABREF19 validates our expectation. Both ModE and HAKE significantly outperform the previous state-of-the-art. Notably, HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively."
],
"highlighted_evidence": [
"WN18RR dataset consists of two kinds of relations: the symmetric relations such as $\\_similar\\_to$, which link entities in the category (b); other relations such as $\\_hypernym$ and $\\_member\\_meronym$, which link entities in the category (a). Actually, RotatE can model entities in the category (b) very well BIBREF7. However, HAKE gains a 0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively.",
"FB15k-237 dataset has more complex relation types and fewer entities, compared with WN18RR and YAGO3-10. Although there are relations that reflect hierarchy in FB15k-237, there are also lots of relations, such as “/location/location/time_zones” and “/film/film/prequel”, that do not lead to hierarchy. The characteristic of this dataset accounts for why our proposed models doesn't outperform the previous state-of-the-art as much as that of WN18RR and YAGO3-10 datasets.",
"AGO3-10 datasets contains entities with high relation-specific indegree BIBREF18. For example, the link prediction task $(?, hasGender, male)$ has over 1000 true answers, which makes the task challenging. Fortunately, we can regard “male” as an entity at higher level of the hierarchy and the predicted head entities as entities at lower level. In this way, YAGO3-10 is a dataset that clearly has semantic hierarchy property, and we can expect that our proposed models is capable of working well on this dataset. Table TABREF19 validates our expectation. Both ModE and HAKE significantly outperform the previous state-of-the-art. Notably, HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively."
]
}
],
"annotation_id": [
"cd2cf9dfc642183e8cca3881679e6ddc87717da8"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Combining the modulus part and the phase part, HAKE maps entities into the polar coordinate system, where the radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively. That is, HAKE maps an entity $h$ to $[\\textbf {h}_m;\\textbf {h}_p]$, where $\\textbf {h}_m$ and $\\textbf {h}_p$ are generated by the modulus part and the phase part, respectively, and $[\\,\\cdot \\,; \\,\\cdot \\,]$ denotes the concatenation of two vectors. Obviously, $([\\textbf {h}_m]_i,[\\textbf {h}_p]_i)$ is a 2D point in the polar coordinate system. Specifically, we formulate HAKE as follows:"
],
"highlighted_evidence": [
"Combining the modulus part and the phase part, HAKE maps entities into the polar coordinate system, where the radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively."
]
}
],
"annotation_id": [
"fa3e121559d06407a693845b4c648fa2f6f90e48"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Details of several knowledge graph embedding models, where ◦ denotes the Hadamard product, f denotes a activation function, ∗ denotes 2D convolution, and ω denotes a filter in convolutional layers. ·̄ denotes conjugate for complex vectors in ComplEx model and 2D reshaping for real vectors in ConvE model.",
"Figure 1: Simple illustration of HAKE. In a polar coordinate system, the radial coordinate aims to model entities at different levels of the hierarchy, and the angular coordinate aims to distinguish entities at the same level of the hierarchy.",
"Table 2: Statistics of datasets. The symbols #E and #R denote the number of entities and relations, respectively. #TR, #VA, and #TE denote the size of train set, validation set, and test set, respectively.",
"Table 3: Evaluation results on WN18RR, FB15k-237 and YAGO3-10 datasets. Results of TransE and RotatE are taken from Nguyen et al. (2018) and Sun et al. (2019), respectively. Other results are taken from Dettmers et al. (2018).",
"Figure 3: Distribution histograms of phases of two relations that reflect the same hierarchy. The relations in Figure (a) and (b) are drawn from WN18RR and FB15k-237, respectively.",
"Figure 2: Distribution histograms of moduli of some relations. The relations are drawn from WN18RR, FB15k-237 and YAGO3-10 dataset. The relation in (d) is /celebrities/celebrity/celebrity friends/celebrities/friendship/friend. Let friend denote the relation for simplicity.",
"Table 4: Ablation results on WN18RR, FB15k-237 and YAGO3-10 datasets. The symbols m, p, and b represent the modulus part, the phase part, and the mixture bias term, respectively.",
"Figure 4: Visualization of the embeddings of several entity pairs from WN18RR dataset.",
"Table 5: Comparison results with TKRL models (Xie, Liu, and Sun 2016) on FB15k dataset. RHE, WHE, RHE+STC, and WHE+STC are four versions of TKRL model , of which the results are taken from the original paper.",
"Figure 5: Illustration of the negative modulus of linked and unlinked entity pairs. For each pair of entities h and t, if there is a link between h and t, we label them as “Linked”. Otherwise, we label them as “Unlinked”. The x-axis represents the number of i that [hm]i and [tm]i have different signs, the y-axis represents the frequency.",
"Figure 6: Histograms of the modulus of entity embeddings. Compared with RotatE, the modulus of entity embeddings in HAKE are more dispersed, making it to have more potential to model the semantic hierarchies.",
"Figure 7: Visualization of several entity embeddings from WN18RR dataset."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Figure3-1.png",
"6-Figure2-1.png",
"7-Table4-1.png",
"7-Figure4-1.png",
"7-Table5-1.png",
"9-Figure5-1.png",
"9-Figure6-1.png",
"10-Figure7-1.png"
]
} |
1910.11471 | Machine Translation from Natural Language to Code using Long-Short Term Memory | Making computer programming language more understandable and easy for the human is a longstanding problem. From assembly language to present day’s object-oriented programming, concepts came to make programming easier so that a programmer can focus on the logic and the architecture rather than the code and language itself. To go a step further in this journey of removing human-computer language barrier, this paper proposes machine learning approach using Recurrent Neural Network (RNN) and Long-Short Term Memory (LSTM) to convert human language into programming language code. The programmer will write expressions for codes in layman’s language, and the machine learning model will translate it to the targeted programming language. The proposed approach yields result with 74.40% accuracy. This can be further improved by incorporating additional techniques, which are also discussed in this paper. | {
"section_name": [
"Introduction",
"Problem Description",
"Problem Description ::: Programming Language Diversity",
"Problem Description ::: Human Language Factor",
"Problem Description ::: NLP of statements",
"Proposed Methodology",
"Proposed Methodology ::: Statistical Machine Translation",
"Proposed Methodology ::: Statistical Machine Translation ::: Data Preparation",
"Proposed Methodology ::: Statistical Machine Translation ::: Vocabulary Generation",
"Proposed Methodology ::: Statistical Machine Translation ::: Neural Model Training",
"Result Analysis",
"Conclusion & Future Works",
"Acknowledgment"
],
"paragraphs": [
[
"Removing computer-human language barrier is an inevitable advancement researchers are thriving to achieve for decades. One of the stages of this advancement will be coding through natural human language instead of traditional programming language. On naturalness of computer programming D. Knuth said, “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.”BIBREF0. Unfortunately, learning programming language is still necessary to instruct it. Researchers and developers are working to overcome this human-machine language barrier. Multiple branches exists to solve this challenge (i.e. inter-conversion of different programming language to have universally connected programming languages). Automatic code generation through natural language is not a new concept in computer science studies. However, it is difficult to create such tool due to these following three reasons–",
"Programming languages are diverse",
"An individual person expresses logical statements differently than other",
"Natural Language Processing (NLP) of programming statements is challenging since both human and programming language evolve over time",
"In this paper, a neural approach to translate pseudo-code or algorithm like human language expression into programming language code is proposed."
],
[
"Code repositories (i.e. Git, SVN) flourished in the last decade producing big data of code allowing data scientists to perform machine learning on these data. In 2017, Allamanis M et al. published a survey in which they presented the state-of-the-art of the research areas where machine learning is changing the way programmers code during software engineering and development process BIBREF1. This paper discusses what are the restricting factors of developing such text-to-code conversion method and what problems need to be solved–"
],
[
"According to the sources, there are more than a thousand actively maintained programming languages, which signifies the diversity of these language . These languages were created to achieve different purpose and use different syntaxes. Low-level languages such as assembly languages are easier to express in human language because of the low or no abstraction at all whereas high-level, or Object-Oriented Programing (OOP) languages are more diversified in syntax and expression, which is challenging to bring into a unified human language structure. Nonetheless, portability and transparency between different programming languages also remains a challenge and an open research area. George D. et al. tried to overcome this problem through XML mapping BIBREF2. They tried to convert codes from C++ to Java using XML mapping as an intermediate language. However, the authors encountered challenges to support different features of both languages."
],
[
"One of the motivations behind this paper is - as long as it is about programming, there is a finite and small set of expression which is used in human vocabulary. For instance, programmers express a for-loop in a very few specific ways BIBREF3. Variable declaration and value assignment expressions are also limited in nature. Although all codes are executable, human representation through text may not due to the semantic brittleness of code. Since high-level languages have a wide range of syntax, programmers use different linguistic expressions to explain those. For instance, small changes like swapping function arguments can significantly change the meaning of the code. Hence the challenge remains in processing human language to understand it properly which brings us to the next problem-"
],
[
"Although there is a finite set of expressions for each programming statements, it is a challenge to extract information from the statements of the code accurately. Semantic analysis of linguistic expression plays an important role in this information extraction. For instance, in case of a loop, what is the initial value? What is the step value? When will the loop terminate?",
"Mihalcea R. et al. has achieved a variable success rate of 70-80% in producing code just from the problem statement expressed in human natural language BIBREF3. They focused solely on the detection of step and loops in their research. Another research group from MIT, Lei et al. use a semantic learning model for text to detect the inputs. The model produces a parser in C++ which can successfully parse more than 70% of the textual description of input BIBREF4. The test dataset and model was initially tested and targeted against ACM-ICPC participantsínputs which contains diverse and sometimes complex input instructions.",
"A recent survey from Allamanis M. et al. presented the state-of-the-art on the area of naturalness of programming BIBREF1. A number of research works have been conducted on text-to-code or code-to-text area in recent years. In 2015, Oda et al. proposed a way to translate each line of Python code into natural language pseudocode using Statistical Machine Learning Technique (SMT) framework BIBREF5 was used. This translation framework was able to - it can successfully translate the code to natural language pseudo coded text in both English and Japanese. In the same year, Chris Q. et al. mapped natural language with simple if-this-then-that logical rules BIBREF6. Tihomir G. and Viktor K. developed an Integrated Development Environment (IDE) integrated code assistant tool anyCode for Java which can search, import and call function just by typing desired functionality through text BIBREF7. They have used model and mapping framework between function signatures and utilized resources like WordNet, Java Corpus, relational mapping to process text online and offline.",
"Recently in 2017, P. Yin and G. Neubig proposed a semantic parser which generates code through its neural model BIBREF8. They formulated a grammatical model which works as a skeleton for neural network training. The grammatical rules are defined based on the various generalized structure of the statements in the programming language."
],
[
"The use of machine learning techniques such as SMT proved to be at most 75% successful in converting human text to executable code. BIBREF9. A programming language is just like a language with less vocabulary compared to a typical human language. For instance, the code vocabulary of the training dataset was 8814 (including variable, function, class names), whereas the English vocabulary to express the same code was 13659 in total. Here, programming language is considered just like another human language and widely used SMT techniques have been applied."
],
[
"SMT techniques are widely used in Natural Language Processing (NLP). SMT plays a significant role in translation from one language to another, especially in lexical and grammatical rule extraction. In SMT, bilingual grammatical structures are automatically formed by statistical approaches instead of explicitly providing a grammatical model. This reduces months and years of work which requires significant collaboration between bi-lingual linguistics. Here, a neural network based machine translation model is used to translate regular text into programming code."
],
[
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
],
[
"To train the neural model, the texts should be converted to a computational entity. To do that, two separate vocabulary files are created - one for the source texts and another for the code. Vocabulary generation is done by tokenization of words. Afterwards, the words are put into their contextual vector space using the popular word2vec BIBREF10 method to make the words computational."
],
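A minimal sketch of the tokenization and word2vec step described above, using gensim. The file names, whitespace tokenizer, and hyperparameters are illustrative assumptions (gensim 4.x argument names are used); the paper does not specify these details.

```python
from gensim.models import Word2Vec

# Hypothetical aligned files: one English description / one line of Python code per line.
src_sentences = [line.split() for line in open("train.src", encoding="utf-8")]
tgt_sentences = [line.split() for line in open("train.tgt", encoding="utf-8")]

# Separate vocabularies and embeddings for the natural-language and the code tokens.
src_w2v = Word2Vec(sentences=src_sentences, vector_size=100, window=5, min_count=1)
tgt_w2v = Word2Vec(sentences=tgt_sentences, vector_size=100, window=5, min_count=1)

print(len(src_w2v.wv), "source tokens;", len(tgt_w2v.wv), "code tokens")
```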
[
"In order to train the translation model between text-to-code an open source Neural Machine Translation (NMT) - OpenNMT implementation is utilized BIBREF11. PyTorch is used as Neural Network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form a LSTM model. LSTM is typically used in seq2seq translation.",
"In Fig. FIGREF13, the neural model architecture is demonstrated. The diagram shows how it takes the source and target text as input and uses it for training. Vector representation of tokenized source and target text are fed into the model. Each token of the source text is passed into an encoder cell. Target text tokens are passed into a decoder cell. Encoder cells are part of the encoder RNN layer and decoder cells are part of the decoder RNN layer. End of the input sequence is marked by a $<$eos$>$ token. Upon getting the $<$eos$>$ token, the final cell state of encoder layer initiate the output layer sequence. At each target cell state, attention is applied with the encoder RNN state and combined with the current hidden state to produce the prediction of next target token. This predictions are then fed back to the target RNN. Attention mechanism helps us to overcome the fixed length restriction of encoder-decoder sequence and allows us to process variable length between input and output sequence. Attention uses encoder state and pass it to the decoder cell to give particular attention to the start of an output layer sequence. The encoder uses an initial state to tell the decoder what it is supposed to generate. Effectively, the decoder learns to generate target tokens, conditioned on the input sequence. Sigmoidal optimization is used to optimize the prediction."
],
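A minimal PyTorch sketch of an encoder-decoder LSTM with attention of the kind described above. It illustrates the general architecture only, not the actual OpenNMT configuration used in the paper; the dimensions and the dot-product attention form are assumptions.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder LSTM with dot-product attention (illustrative only)."""
    def __init__(self, src_vocab, tgt_vocab, emb=128, hid=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(2 * hid, tgt_vocab)   # [decoder state; attention context]

    def forward(self, src, tgt):
        enc_out, state = self.encoder(self.src_emb(src))        # (B, S, H)
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)     # (B, T, H)
        # Dot-product attention over encoder states for every decoder step.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))    # (B, T, S)
        attn = torch.softmax(scores, dim=-1)
        context = torch.bmm(attn, enc_out)                      # (B, T, H)
        return self.out(torch.cat([dec_out, context], dim=-1))  # (B, T, V)

# Training (teacher forcing) would apply cross-entropy between these logits
# and the target tokens shifted by one position.
```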
[
"Training parallel corpus had 18805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation data is used to generate the recurrent neural model, which is 3% of the training data. We run the training with epoch value of 10 with a batch size of 64. After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40% (Fig. FIGREF17).",
"Although the generated code is incoherent and often predict wrong code token, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such scenario) to build a more accurate model. The incoherence can be resolved by incorporating coding syntax tree model in future. For instance–",
"\"define the method tzname with 2 arguments: self and dt.\"",
"is translated into–",
"def __init__ ( self , regex ) :.",
"The translator is successfully generating the whole codeline automatically but missing the noun part (parameter and function name) part of the syntax."
],
[
"The main advantage of translating to a programming language is - it has a concrete and strict lexical and grammatical structure which human languages lack. The aim of this paper was to make the text-to-code framework work for general purpose programming language, primarily Python. In later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, Abstract Syntax Tree(AST) can be beneficial.",
"The contribution of this research is a machine learning model which can turn the human expression into coding expressions. This paper also discusses available methods which convert natural language to programming language successfully in fixed or tightly bounded linguistic paradigm. Approaching this problem using machine learning will give us the opportunity to explore the possibility of unified programming interface as well in the future."
],
[
"We would like to thank Dr. Khandaker Tabin Hasan, Head of the Depertment of Computer Science, American International University-Bangladesh for his inspiration and encouragement in all of our research works. Also, thanks to Future Technology Conference - 2019 committee for partially supporting us to join the conference and one of our colleague - Faheem Abrar, Software Developer for his thorough review and comments on this research work and supporting us by providing fund."
]
]
} | {
"question": [
"What additional techniques are incorporated?",
"What dataset do they use?",
"Do they compare to other models?",
"What is the architecture of the system?",
"How long are expressions in layman's language?",
"What additional techniques could be incorporated to further improve accuracy?",
"What programming language is target language?",
"What dataset is used to measure accuracy?"
],
"question_id": [
"db9021ddd4593f6fadf172710468e2fdcea99674",
"8ea4bd4c1d8a466da386d16e4844ea932c44a412",
"92240eeab107a4f636705b88f00cefc4f0782846",
"4196d329061f5a9d147e1e77aeed6a6bd9b35d18",
"a37e4a21ba98b0259c36deca0d298194fa611d2f",
"321429282557e79061fe2fe02a9467f3d0118cdd",
"891cab2e41d6ba962778bda297592c916b432226",
"1eeabfde99594b8d9c6a007f50b97f7f527b0a17"
],
"nlp_background": [
"two",
"two",
"two",
"two",
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"computer vision",
"computer vision",
"computer vision",
"computer vision"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
},
{
"unanswerable": false,
"extractive_spans": [
"incorporating coding syntax tree model"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Although the generated code is incoherent and often predict wrong code token, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such scenario) to build a more accurate model. The incoherence can be resolved by incorporating coding syntax tree model in future. For instance–",
"\"define the method tzname with 2 arguments: self and dt.\"",
"is translated into–",
"def __init__ ( self , regex ) :.",
"The translator is successfully generating the whole codeline automatically but missing the noun part (parameter and function name) part of the syntax."
],
"highlighted_evidence": [
"Although the generated code is incoherent and often predict wrong code token, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such scenario) to build a more accurate model. The incoherence can be resolved by incorporating coding syntax tree model in future. For instance–\n\n\"define the method tzname with 2 arguments: self and dt.\"\n\nis translated into–\n\ndef __init__ ( self , regex ) :.\n\nThe translator is successfully generating the whole codeline automatically but missing the noun part (parameter and function name) part of the syntax."
]
}
],
"annotation_id": [
"712162ee41fcd33e17f5974b52db5ef08caa28ef",
"ca3b72709cbea8e97d402eef60ef949c8818ae6f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "A parallel corpus where the source is an English expression of code and the target is Python code.",
"evidence": [
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
],
"highlighted_evidence": [
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
]
},
{
"unanswerable": false,
"extractive_spans": [
" text-code parallel corpus"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
],
"highlighted_evidence": [
"A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it ."
]
}
],
"annotation_id": [
"4fe5615cf767f286711731cd0059c208e82a0974",
"e21d60356450f2765a322002352ee1b8ceb50253"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"f669e556321ae49a72f0b9be6c4b7831e37edf1d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"seq2seq translation"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In order to train the translation model between text-to-code an open source Neural Machine Translation (NMT) - OpenNMT implementation is utilized BIBREF11. PyTorch is used as Neural Network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form a LSTM model. LSTM is typically used in seq2seq translation."
],
"highlighted_evidence": [
"For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form a LSTM model. LSTM is typically used in seq2seq translation."
]
}
],
"annotation_id": [
"c499a5ca56894e542c2c4eabe925b81a2ea4618e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"469be6a5ce7968933dd77a4449dd88ee01d3d579"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"phrase-based word embedding",
"Abstract Syntax Tree(AST)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The main advantage of translating to a programming language is - it has a concrete and strict lexical and grammatical structure which human languages lack. The aim of this paper was to make the text-to-code framework work for general purpose programming language, primarily Python. In later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, Abstract Syntax Tree(AST) can be beneficial."
],
"highlighted_evidence": [
"In later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, Abstract Syntax Tree(AST) can be beneficial."
]
}
],
"annotation_id": [
"2ef1c3976eec3f9d17efac630b098f10d86931e4"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Python"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language."
],
"highlighted_evidence": [
"In target data, the code is written in Python programming language."
]
}
],
"annotation_id": [
"3aa253475c66a97de49bc647af6be28b75a92be4"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"validation data"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Training parallel corpus had 18805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation data is used to generate the recurrent neural model, which is 3% of the training data. We run the training with epoch value of 10 with a batch size of 64. After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40% (Fig. FIGREF17)."
],
"highlighted_evidence": [
"During the final training process, 500 validation data is used to generate the recurrent neural model, which is 3% of the training data.",
"After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40%"
]
}
],
"annotation_id": [
"d07da696fb0d6e94d658c0950e239bb87edb1633"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Fig. 1. Text-Code bi-lingual corpus",
"Fig. 2. Neural training model architecture of Text-To-Code",
"Fig. 3. Accuracy gain in progress of training the RNN"
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png"
]
} |
1910.09399 | A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis | Text-to-image synthesis refers to computational methods which translate human written textual descriptions, in the form of keywords or sentences, into images with similar semantic meaning to the text. In earlier research, image synthesis relied mainly on word to image correlation analysis combined with supervised methods to find best alignment of the visual content matching to the text. Recent progress in deep learning (DL) has brought a new set of unsupervised deep learning methods, particularly deep generative models which are able to generate realistic visual images using suitably trained neural network models. In this paper, we review the most recent development in the text-to-image synthesis research domain. Our survey first introduces image synthesis and its challenges, and then reviews key concepts such as generative adversarial networks (GANs) and deep convolutional encoder-decoder neural networks (DCNN). After that, we propose a taxonomy to summarize GAN based text-to-image synthesis into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANS, and Motion Enhancement GANs. We elaborate the main objective of each group, and further review typical GAN architectures in each group. The taxonomy and the review outline the techniques and the evolution of different approaches, and eventually provide a clear roadmap to summarize the list of contemporaneous solutions that utilize GANs and DCNNs to generate enthralling results in categories such as human faces, birds, flowers, room interiors, object reconstruction from edge maps (games) etc. The survey will conclude with a comparison of the proposed solutions, challenges that remain unresolved, and future developments in the text-to-image synthesis domain. | {
"section_name": [
"Introduction",
"Introduction ::: blackTraditional Learning Based Text-to-image Synthesis",
"Introduction ::: GAN Based Text-to-image Synthesis",
"Related Work",
"Preliminaries and Frameworks",
"Preliminaries and Frameworks ::: Generative Adversarial Neural Network",
"Preliminaries and Frameworks ::: cGAN: Conditional GAN",
"Preliminaries and Frameworks ::: Simple GAN Frameworks for Text-to-Image Synthesis",
"Preliminaries and Frameworks ::: Advanced GAN Frameworks for Text-to-Image Synthesis",
"Text-to-Image Synthesis Taxonomy and Categorization",
"Text-to-Image Synthesis Taxonomy and Categorization ::: GAN based Text-to-Image Synthesis Taxonomy",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN Extensions",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: MC-GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN++",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: AttnGAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: HDGAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: AC-GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: TAC-GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: Text-SeGAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: MirrorGAN and Scene Graph GAN",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: ObamaNet and T2S",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: T2V",
"Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: StoryGAN",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Applications",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Datasets",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Evaluation Metrics",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: GAN Based Text-to-image Synthesis Results Comparison",
"GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Notable Mentions",
"Conclusion",
"conflict of interest"
],
"paragraphs": [
[
"“ (GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016)",
"– Yann LeCun",
"A picture is worth a thousand words! While written text provide efficient, effective, and concise ways for communication, visual content, such as images, is a more comprehensive, accurate, and intelligible method of information sharing and understanding. Generation of images from text descriptions, i.e. text-to-image synthesis, is a complex computer vision and machine learning problem that has seen great progress over recent years. Automatic image generation from natural language may allow users to describe visual elements through visually-rich text descriptions. The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computer-aided design, image editing BIBREF0, BIBREF1, game engines for the development of the next generation of video gamesBIBREF2, and pictorial art generation BIBREF3."
],
[
"In the early stages of research, text-to-image synthesis was mainly carried out through a search and supervised learning combined process BIBREF4, as shown in Figure FIGREF4. In order to connect text descriptions to images, one could use correlation between keywords (or keyphrase) & images that identifies informative and “picturable” text units; then, these units would search for the most likely image parts conditioned on the text, eventually optimizing the picture layout conditioned on both the text and the image parts. Such methods often integrated multiple artificial intelligence key components, including natural language processing, computer vision, computer graphics, and machine learning.",
"The major limitation of the traditional learning based text-to-image synthesis approaches is that they lack the ability to generate new image content; they can only change the characteristics of the given/training images. Alternatively, research in generative models has advanced significantly and delivers solutions to learn from training images and produce new visual content. For example, Attribute2Image BIBREF5 models each image as a composite of foreground and background. In addition, a layered generative model with disentangled latent variables is learned, using a variational auto-encoder, to generate visual content. Because the learning is customized/conditioned by given attributes, the generative models of Attribute2Image can generate images with respect to different attributes, such as gender, hair color, age, etc., as shown in Figure FIGREF5."
],
[
"Although generative model based text-to-image synthesis provides much more realistic image synthesis results, the image generation is still conditioned by the limited attributes. In recent years, several papers have been published on the subject of text-to-image synthesis. Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11.",
"First introduced by Ian Goodfellow et al. BIBREF9, generative adversarial networks (GANs) consist of two neural networks paired with a discriminator and a generator. These two models compete with one another, with the generator attempting to produce synthetic/fake samples that will fool the discriminator and the discriminator attempting to differentiate between real (genuine) and synthetic samples. Because GANs' adversarial training aims to cause generators to produce images similar to the real (training) images, GANs can naturally be used to generate synthetic images (image synthesis), and this process can even be customized further by using text descriptions to specify the types of images to generate, as shown in Figure FIGREF6.",
"Much like text-to-speech and speech-to-text conversion, there exists a wide variety of problems that text-to-image synthesis could solve in the computer vision field specifically BIBREF8, BIBREF12. Nowadays, researchers are attempting to solve a plethora of computer vision problems with the aid of deep convolutional networks, generative adversarial networks, and a combination of multiple methods, often called multimodal learning methods BIBREF8. For simplicity, multiple learning methods will be referred to as multimodal learning hereafter BIBREF13. Researchers often describe multimodal learning as a method that incorporates characteristics from several methods, algorithms, and ideas. This can include ideas from two or more learning approaches in order to create a robust implementation to solve an uncommon problem or improve a solution BIBREF8, BIBREF14, BIBREF15, BIBREF16, BIBREF17.",
"black In this survey, we focus primarily on reviewing recent works that aim to solve the challenge of text-to-image synthesis using generative adversarial networks (GANs). In order to provide a clear roadmap, we propose a taxonomy to summarize reviewed GANs into four major categories. Our review will elaborate the motivations of methods in each category, analyze typical models, their network architectures, and possible drawbacks for further improvement. The visual abstract of the survey and the list of reviewed GAN frameworks is shown in Figure FIGREF8.",
"black The remainder of the survey is organized as follows. Section 2 presents a brief summary of existing works on subjects similar to that of this paper and highlights the key distinctions making ours unique. Section 3 gives a short introduction to GANs and some preliminary concepts related to image generation, as they are the engines that make text-to-image synthesis possible and are essential building blocks to achieve photo-realistic images from text descriptions. Section 4 proposes a taxonomy to summarize GAN based text-to-image synthesis, discusses models and architectures of novel works focused solely on text-to-image synthesis. This section will also draw key contributions from these works in relation to their applications. Section 5 reviews GAN based text-to-image synthesis benchmarks, performance metrics, and comparisons, including a simple review of GANs for other applications. In section 6, we conclude with a brief summary and outline ideas for future interesting developments in the field of text-to-image synthesis."
],
[
"With the growth and success of GANs, deep convolutional decoder networks, and multimodal learning methods, these techniques were some of the first procedures which aimed to solve the challenge of image synthesis. Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments, with numerous proposals and publications detailing their contributions. Because GANs, introduced by BIBREF9, are emerging research topics, their practical applications to image synthesis are still in their infancy. Recently, many new GAN architectures and designs have been proposed to use GANs for different applications, e.g. using GANs to generate sentimental texts BIBREF18, or using GANs to transform natural images into cartoons BIBREF19.",
"Although GANs are becoming increasingly popular, very few survey papers currently exist to summarize and outline contemporaneous technical innovations and contributions of different GAN architectures BIBREF20, BIBREF21. Survey papers specifically attuned to analyzing different contributions to text-to-image synthesis using GANs are even more scarce. We have thus found two surveys BIBREF6, BIBREF7 on image synthesis using GANs, which are the two most closely related publications to our survey objective. In the following paragraphs, we briefly summarize each of these surveys and point out how our objectives differ from theirs.",
"In BIBREF6, the authors provide an overview of image synthesis using GANs. In this survey, the authors discuss the motivations for research on image synthesis and introduce some background information on the history of GANs, including a section dedicated to core concepts of GANs, namely generators, discriminators, and the min-max game analogy, and some enhancements to the original GAN model, such as conditional GANs, addition of variational auto-encoders, etc.. In this survey, we will carry out a similar review of the background knowledge because the understanding of these preliminary concepts is paramount for the rest of the paper. Three types of approaches for image generation are reviewed, including direct methods (single generator and discriminator), hierarchical methods (two or more generator-discriminator pairs, each with a different goal), and iterative methods (each generator-discriminator pair generates a gradually higher-resolution image). Following the introduction, BIBREF6 discusses methods for text-to-image and image-to-image synthesis, respectively, and also describes several evaluation metrics for synthetic images, including inception scores and Frechet Inception Distance (FID), and explains the significance of the discriminators acting as learned loss functions as opposed to fixed loss functions.",
"Different from the above survey, which has a relatively broad scope in GANs, our objective is heavily focused on text-to-image synthesis. Although this topic, text-to-image synthesis, has indeed been covered in BIBREF6, they did so in a much less detailed fashion, mostly listing the many different works in a time-sequential order. In comparison, we will review several representative methods in the field and outline their models and contributions in detail.",
"Similarly to BIBREF6, the second survey paper BIBREF7 begins with a standard introduction addressing the motivation of image synthesis and the challenges it presents followed by a section dedicated to core concepts of GANs and enhancements to the original GAN model. In addition, the paper covers the review of two types of applications: (1) unconstrained applications of image synthesis such as super-resolution, image inpainting, etc., and (2) constrained image synthesis applications, namely image-to-image, text-to-image, and sketch-to image, and also discusses image and video editing using GANs. Again, the scope of this paper is intrinsically comprehensive, while we focus specifically on text-to-image and go into more detail regarding the contributions of novel state-of-the-art models.",
"Other surveys have been published on related matters, mainly related to the advancements and applications of GANs BIBREF22, BIBREF23, but we have not found any prior works which focus specifically on text-to-image synthesis using GANs. To our knowledge, this is the first paper to do so.",
"black"
],
[
"In this section, we first introduce preliminary knowledge of GANs and one of its commonly used variants, conditional GAN (i.e. cGAN), which is the building block for many GAN based text-to-image synthesis models. After that, we briefly separate GAN based text-to-image synthesis into two types, Simple GAN frameworks vs. Advanced GAN frameworks, and discuss why advanced GAN architecture for image synthesis.",
"black Notice that the simple vs. advanced GAN framework separation is rather too brief, our taxonomy in the next section will propose a taxonomy to summarize advanced GAN frameworks into four categories, based on their objective and designs."
],
[
"Before moving on to a discussion and analysis of works applying GANs for text-to-image synthesis, there are some preliminary concepts, enhancements of GANs, datasets, and evaluation metrics that are present in some of the works described in the next section and are thus worth introducing.",
"As stated previously, GANs were introduced by Ian Goodfellow et al. BIBREF9 in 2014, and consist of two deep neural networks, a generator and a discriminator, which are trained independently with conflicting goals: The generator aims to generate samples closely related to the original data distribution and fool the discriminator, while the discriminator aims to distinguish between samples from the generator model and samples from the true data distribution by calculating the probability of the sample coming from either source. A conceptual view of the generative adversarial network (GAN) architecture is shown in Figure FIGREF11.",
"The training of GANs is an iterative process that, with each iteration, updates the generator and the discriminator with the goal of each defeating the other. leading each model to become increasingly adept at its specific task until a threshold is reached. This is analogous to a min-max game between the two models, according to the following equation:",
"In Eq. (DISPLAY_FORM10), $x$ denotes a multi-dimensional sample, e.g., an image, and $z$ denotes a multi-dimensional latent space vector, e.g., a multidimensional data point following a predefined distribution function such as that of normal distributions. $D_{\\theta _d}()$ denotes a discriminator function, controlled by parameters $\\theta _d$, which aims to classify a sample into a binary space. $G_{\\theta _g}()$ denotes a generator function, controlled by parameters $\\theta _g$, which aims to generate a sample from some latent space vector. For example, $G_{\\theta _g}(z)$ means using a latent vector $z$ to generate a synthetic/fake image, and $D_{\\theta _d}(x)$ means to classify an image $x$ as binary output (i.e. true/false or 1/0). In the GAN setting, the discriminator $D_{\\theta _d}()$ is learned to distinguish a genuine/true image (labeled as 1) from fake images (labeled as 0). Therefore, given a true image $x$, the ideal output from the discriminator $D_{\\theta _d}(x)$ would be 1. Given a fake image generated from the generator $G_{\\theta _g}(z)$, the ideal prediction from the discriminator $D_{\\theta _d}(G_{\\theta _g}(z))$ would be 0, indicating the sample is a fake image.",
"Following the above definition, the $\\min \\max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\\theta _d$) and generator ($\\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\\max _{\\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\\min _{\\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs.",
"Generator - In image synthesis, the generator network can be thought of as a mapping from one representation space (latent space) to another (actual data) BIBREF21. When it comes to image synthesis, all of the images in the data space fall into some distribution in a very complex and high-dimensional feature space. Sampling from such a complex space is very difficult, so GANs instead train a generator to create synthetic images from a much more simple feature space (usually random noise) called the latent space. The generator network performs up-sampling of the latent space and is usually a deep neural network consisting of several convolutional and/or fully connected layers BIBREF21. The generator is trained using gradient descent to update the weights of the generator network with the aim of producing data (in our case, images) that the discriminator classifies as real.",
"Discriminator - The discriminator network can be thought of as a mapping from image data to the probability of the image coming from the real data space, and is also generally a deep neural network consisting of several convolution and/or fully connected layers. However, the discriminator performs down-sampling as opposed to up-sampling. Like the generator, it is trained using gradient descent but its goal is to update the weights so that it is more likely to correctly classify images as real or fake.",
"In GANs, the ideal outcome is for both the generator's and discriminator's cost functions to converge so that the generator produces photo-realistic images that are indistinguishable from real data, and the discriminator at the same time becomes an expert at differentiating between real and synthetic data. This, however, is not possible since a reduction in cost of one model generally leads to an increase in cost of the other. This phenomenon makes training GANs very difficult, and training them simultaneously (both models performing gradient descent in parallel) often leads to a stable orbit where neither model is able to converge. To combat this, the generator and discriminator are often trained independently. In this case, the GAN remains the same, but there are different training stages. In one stage, the weights of the generator are kept constant and gradient descent updates the weights of the discriminator, and in the other stage the weights of the discriminator are kept constant while gradient descent updates the weights of the generator. This is repeated for some number of epochs until a desired low cost for each model is reached BIBREF25."
],
[
"Conditional Generative Adversarial Networks (cGAN) are an enhancement of GANs proposed by BIBREF26 shortly after the introduction of GANs by BIBREF9. The objective function of the cGAN is defined in Eq. (DISPLAY_FORM13) which is very similar to the GAN objective function in Eq. (DISPLAY_FORM10) except that the inputs to both discriminator and generator are conditioned by a class label $y$.",
"The main technical innovation of cGAN is that it introduces an additional input or inputs to the original GAN model, allowing the model to be trained on information such as class labels or other conditioning variables as well as the samples themselves, concurrently. Whereas the original GAN was trained only with samples from the data distribution, resulting in the generated sample reflecting the general data distribution, cGAN enables directing the model to generate more tailored outputs.",
"In Figure FIGREF14, the condition vector is the class label (text string) \"Red bird\", which is fed to both the generator and discriminator. It is important, however, that the condition vector is related to the real data. If the model in Figure FIGREF14 was trained with the same set of real data (red birds) but the condition text was \"Yellow fish\", the generator would learn to create images of red birds when conditioned with the text \"Yellow fish\".",
"Note that the condition vector in cGAN can come in many forms, such as texts, not just limited to the class label. Such a unique design provides a direct solution to generate images conditioned by predefined specifications. As a result, cGAN has been used in text-to-image synthesis since the very first day of its invention although modern approaches can deliver much better text-to-image synthesis results.",
"black"
],
[
"In order to generate images from text, one simple solution is to employ the conditional GAN (cGAN) designs and add conditions to the training samples, such that the GAN is trained with respect to the underlying conditions. Several pioneer works have followed similar designs for text-to-image synthesis.",
"black An essential disadvantage of using cGAN for text-to-image synthesis is that that it cannot handle complicated textual descriptions for image generation, because cGAN uses labels as conditions to restrict the GAN inputs. If the text inputs have multiple keywords (or long text descriptions) they cannot be used simultaneously to restrict the input. Instead of using text as conditions, another two approaches BIBREF8, BIBREF16 use text as input features, and concatenate such features with other features to train discriminator and generator, as shown in Figure FIGREF15(b) and (c). To ensure text being used as GAN input, a feature embedding or feature representation learning BIBREF29, BIBREF30 function $\\varphi ()$ is often introduced to convert input text as numeric features, which are further concatenated with other features to train GANs.",
"black"
],
[
"Motivated by the GAN and conditional GAN (cGAN) design, many GAN based frameworks have been proposed to generate images, with different designs and architectures, such as using multiple discriminators, using progressively trained discriminators, or using hierarchical discriminators. Figure FIGREF17 outlines several advanced GAN frameworks in the literature. In addition to these frameworks, many news designs are being proposed to advance the field with rather sophisticated designs. For example, a recent work BIBREF37 proposes to use a pyramid generator and three independent discriminators, blackeach focusing on a different aspect of the images, to lead the generator towards creating images that are photo-realistic on multiple levels. Another recent publication BIBREF38 proposes to use discriminator to measure semantic relevance between image and text instead of class prediction (like most discriminator in GANs does), resulting a new GAN structure outperforming text conditioned auxiliary classifier (TAC-GAN) BIBREF16 and generating diverse, realistic, and relevant to the input text regardless of class.",
"black In the following section, we will first propose a taxonomy that summarizes advanced GAN frameworks for text-to-image synthesis, and review most recent proposed solutions to the challenge of generating photo-realistic images conditioned on natural language text descriptions using GANs. The solutions we discuss are selected based on relevance and quality of contributions. Many publications exist on the subject of image-generation using GANs, but in this paper we focus specifically on models for text-to-image synthesis, with the review emphasizing on the “model” and “contributions” for text-to-image synthesis. At the end of this section, we also briefly review methods using GANs for other image-synthesis applications.",
"black"
],
[
"In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANS to solve certain aspects of the text-to-mage synthesis challenges.",
"black"
],
[
"Although the ultimate goal of Text-to-Image synthesis is to generate images closely related to the textual descriptions, the relevance of the images to the texts are often validated from different perspectives, due to the inherent diversity of human perceptions. For example, when generating images matching to the description “rose flowers”, some users many know the exact type of flowers they like and intend to generate rose flowers with similar colors. Other users, may seek to generate high quality rose flowers with a nice background (e.g. garden). The third group of users may be more interested in generating flowers similar to rose but with different colors and visual appearance, e.g. roses, begonia, and peony. The fourth group of users may want to not only generate flower images, but also use them to form a meaningful action, e.g. a video clip showing flower growth, performing a magic show using those flowers, or telling a love story using the flowers.",
"blackFrom the text-to-Image synthesis point of view, the first group of users intend to precisely control the semantic of the generated images, and their goal is to match the texts and images at the semantic level. The second group of users are more focused on the resolutions and the qualify of the images, in addition to the requirement that the images and texts are semantically related. For the third group of users, their goal is to diversify the output images, such that their images carry diversified visual appearances and are also semantically related. The fourth user group adds a new dimension in image synthesis, and aims to generate sequences of images which are coherent in temporal order, i.e. capture the motion information.",
"black Based on the above descriptions, we categorize GAN based Text-to-Image Synthesis into a taxonomy with four major categories, as shown in Fig. FIGREF24.",
"Semantic Enhancement GANs: Semantic enhancement GANs represent pioneer works of GAN frameworks for text-to-image synthesis. The main focus of the GAN frameworks is to ensure that the generated images are semantically related to the input texts. This objective is mainly achieved by using a neural network to encode texts as dense features, which are further fed to a second network to generate images matching to the texts.",
"Resolution Enhancement GANs: Resolution enhancement GANs mainly focus on generating high qualify images which are semantically matched to the texts. This is mainly achieved through a multi-stage GAN framework, where the outputs from earlier stage GANs are fed to the second (or later) stage GAN to generate better qualify images.",
"Diversity Enhancement GANs: Diversity enhancement GANs intend to diversify the output images, such that the generated images are not only semantically related but also have different types and visual appearance. This objective is mainly achieved through an additional component to estimate semantic relevance between generated images and texts, in order to maximize the output diversity.",
"Motion Enhancement GANs: Motion enhancement GANs intend to add a temporal dimension to the output images, such that they can form meaningful actions with respect to the text descriptions. This goal mainly achieved though a two-step process which first generates images matching to the “actions” of the texts, followed by a mapping or alignment procedure to ensure that images are coherent in the temporal order.",
"black In the following, we will introduce how these GAN frameworks evolve for text-to-image synthesis, and will also review some typical methods of each category.",
"black"
],
[
"Semantic relevance is one the of most important criteria of the text-to-image synthesis. For most GNAs discussed in this survey, they are required to generate images semantically related to the text descriptions. However, the semantic relevance is a rather subjective measure, and images are inherently rich in terms of its semantics and interpretations. Therefore, many GANs are further proposed to enhance the text-to-image synthesis from different perspectives. In this subsection, we will review several classical approaches which are commonly served as text-to-image synthesis baseline.",
"black"
],
[
"Deep convolution generative adversarial network (DC-GAN) BIBREF8 represents the pioneer work for text-to-image synthesis using GANs. Its main goal is to train a deep convolutional generative adversarial network (DC-GAN) on text features. During this process these text features are encoded by another neural network. This neural network is a hybrid convolutional recurrent network at the character level. Concurrently, both neural networks have also feed-forward inference in the way they condition text features. Generating realistic images automatically from natural language text is the motivation of several of the works proposed in this computer vision field. However, actual artificial intelligence (AI) systems are far from achieving this task BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Lately, recurrent neural networks led the way to develop frameworks that learn discriminatively on text features. At the same time, generative adversarial networks (GANs) began recently to show some promise on generating compelling images of a whole host of elements including but not limited to faces, birds, flowers, and non-common images such as room interiorsBIBREF8. DC-GAN is a multimodal learning model that attempts to bridge together both of the above mentioned unsupervised machine learning algorithms, the recurrent neural networks (RNN) and generative adversarial networks (GANs), with the sole purpose of speeding the generation of text-to-image synthesis.",
"black Deep learning shed some light to some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, a bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text to image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carry five text descriptions. These text descriptions were created by the research team when setting up the evaluation environment. The DC-GANs model is subsequently trained on several subcategories. Subcategories in this research represent the training and testing sub datasets. The performance shown by these experiments display a promising yet effective way to generate images from textual natural language descriptions BIBREF8.",
"black"
],
[
"Following the pioneer DC-GAN framework BIBREF8, many researches propose revised network structures (e.g. different discriminaotrs) in order to improve images with better semantic relevance to the texts. Based on the deep convolutional adversarial network (DC-GAN) network architecture, GAN-CLS with image-text matching discriminator, GAN-INT learned with text manifold interpolation and GAN-INT-CLS which combines both are proposed to find semantic match between text and image. Similar to the DC-GAN architecture, an adaptive loss function (i.e. Perceptual Loss BIBREF48) is proposed for semantic image synthesis which can synthesize a realistic image that not only matches the target text description but also keep the irrelavant features(e.g. background) from source images BIBREF49. Regarding to the Perceptual Losses, three loss functions (i.e. Pixel reconstruction loss, Activation reconstruction loss and Texture reconstruction loss) are proposed in BIBREF50 in which they construct the network architectures based on the DC-GAN, i.e. GAN-INT-CLS-Pixel, GAN-INT-CLS-VGG and GAN-INT-CLS-Gram with respect to three losses. In BIBREF49, a residual transformation unit is added in the network to retain similar structure of the source image.",
"black Following the BIBREF49 and considering the features in early layers address background while foreground is obtained in latter layers in CNN, a pair of discriminators with different architectures (i.e. Paired-D GAN) is proposed to synthesize background and foreground from a source image seperately BIBREF51. Meanwhile, the skip-connection in the generator is employed to more precisely retain background information in the source image.",
"black"
],
[
"When synthesising images, most text-to-image synthesis methods consider each output image as one single unit to characterize its semantic relevance to the texts. This is likely problematic because most images naturally consist of two crucial components: foreground and background. Without properly separating these two components, it's hard to characterize the semantics of an image if the whole image is treated as a single unit without proper separation.",
"black In order to enhance the semantic relevance of the images, a multi-conditional GAN (MC-GAN) BIBREF52 is proposed to synthesize a target image by combining the background of a source image and a text-described foreground object which does not exist in the source image. A unique feature of MC-GAN is that it proposes a synthesis block in which the background feature is extracted from the given image without non-linear function (i.e. only using convolution and batch normalization) and the foreground feature is the feature map from the previous layer.",
"black Because MC-GAN is able to properly model the background and foreground of the generated images, a unique strength of MC-GAN is that users are able to provide a base image and MC-GAN is able to preserve the background information of the base image to generate new images. black"
],
[
"Due to the fact that training GANs will be much difficult when generating high-resolution images, a two stage GAN (i.e. stackGAN) is proposed in which rough images(i.e. low-resolution images) are generated in stage-I and refined in stage-II. To further improve the quality of generated images, the second version of StackGAN (i.e. Stack++) is proposed to use multi-stage GANs to generate multi-scale images. A color-consistency regularization term is also added into the loss to keep the consistency of images in different scales.",
"black While stackGAN and StackGAN++ are both built on the global sentence vector, AttnGAN is proposed to use attention mechanism (i.e. Deep Attentional Multimodal Similarity Model (DAMSM)) to model the multi-level information (i.e. word level and sentence level) into GANs. In the following, StackGAN, StackGAN++ and AttnGAN will be explained in detail.",
"black Recently, Dynamic Memory Generative Adversarial Network (i.e. DM-GAN)BIBREF53 which uses a dynamic memory component is proposed to focus on refiningthe initial generated image which is the key to the success of generating high quality images."
],
[
"In 2017, Zhang et al. proposed a model for generating photo-realistic images from text descriptions called StackGAN (Stacked Generative Adversarial Network) BIBREF33. In their work, they define a two-stage model that uses two cascaded GANs, each corresponding to one of the stages. The stage I GAN takes a text description as input, converts the text description to a text embedding containing several conditioning variables, and generates a low-quality 64x64 image with rough shapes and colors based on the computed conditioning variables. The stage II GAN then takes this low-quality stage I image as well as the same text embedding and uses the conditioning variables to correct and add more detail to the stage I result. The output of stage II is a photorealistic 256$times$256 image that resembles the text description with compelling accuracy.",
"One major contribution of StackGAN is the use of cascaded GANs for text-to-image synthesis through a sketch-refinement process. By conditioning the stage II GAN on the image produced by the stage I GAN and text description, the stage II GAN is able to correct defects in the stage I output, resulting in high-quality 256x256 images. Prior works have utilized “stacked” GANs to separate the image generation process into structure and style BIBREF42, multiple stages each generating lower-level representations from higher-level representations of the previous stage BIBREF35, and multiple stages combined with a laplacian pyramid approach BIBREF54, which was introduced for image compression by P. Burt and E. Adelson in 1983 and uses the differences between consecutive down-samples of an original image to reconstruct the original image from its down-sampled version BIBREF55. However, these works did not use text descriptions to condition their generator models.",
"Conditioning Augmentation is the other major contribution of StackGAN. Prior works transformed the natural language text description into a fixed text embedding containing static conditioning variables which were fed to the generator BIBREF8. StackGAN does this and then creates a Gaussian distribution from the text embedding and randomly selects variables from the Gaussian distribution to add to the set of conditioning variables during training. This encourages robustness by introducing small variations to the original text embedding for a particular training image while keeping the training image that the generated output is compared to the same. The result is that the trained model produces more diverse images in the same distribution when using Conditioning Augmentation than the same model using a fixed text embedding BIBREF33."
],
[
"Proposed by the same users as StackGAN, StackGAN++ is also a stacked GAN model, but organizes the generators and discriminators in a “tree-like” structure BIBREF47 with multiple stages. The first stage combines a noise vector and conditioning variables (with Conditional Augmentation introduced in BIBREF33) for input to the first generator, which generates a low-resolution image, 64$\\times $64 by default (this can be changed depending on the desired number of stages). Each following stage uses the result from the previous stage and the conditioning variables to produce gradually higher-resolution images. These stages do not use the noise vector again, as the creators assume that the randomness it introduces is already preserved in the output of the first stage. The final stage produces a 256$\\times $256 high-quality image.",
"StackGAN++ introduces the joint conditional and unconditional approximation in their designs BIBREF47. The discriminators are trained to calculate the loss between the image produced by the generator and the conditioning variables (measuring how accurately the image represents the description) as well as the loss between the image and real images (probability of the image being real or fake). The generators then aim to minimize the sum of these losses, improving the final result."
],
[
"Attentional Generative Adversarial Network (AttnGAN) BIBREF10 is very similar, in terms of its structure, to StackGAN++ BIBREF47, discussed in the previous section, but some novel components are added. Like previous works BIBREF56, BIBREF8, BIBREF33, BIBREF47, a text encoder generates a text embedding with conditioning variables based on the overall sentence. Additionally, the text encoder generates a separate text embedding with conditioning variables based on individual words. This process is optimized to produce meaningful variables using a bidirectional recurrent neural network (BRNN), more specifically bidirectional Long Short Term Memory (LSTM) BIBREF57, which, for each word in the description, generates conditions based on the previous word as well as the next word (bidirectional). The first stage of AttnGAN generates a low-resolution image based on the sentence-level text embedding and random noise vector. The output is fed along with the word-level text embedding to an “attention model”, which matches the word-level conditioning variables to regions of the stage I image, producing a word-context matrix. This is then fed to the next stage of the model along with the raw previous stage output. Each consecutive stage works in the same manner, but produces gradually higher-resolution images conditioned on the previous stage.",
"Two major contributions were introduced in AttnGAN: the attentional generative network and the Deep Attentional Multimodal Similarity Model (DAMSM) BIBREF47. The attentional generative network matches specific regions of each stage's output image to conditioning variables from the word-level text embedding. This is a very worthy contribution, allowing each consecutive stage to focus on specific regions of the image independently, adding “attentional” details region by region as opposed to the whole image. The DAMSM is also a key feature introduced by AttnGAN, which is used after the result of the final stage to calculate the similarity between the generated image and the text embedding at both the sentence level and the more fine-grained word level. Table TABREF48 shows scores from different metrics for StackGAN, StackGAN++, AttnGAN, and HDGAN on the CUB, Oxford, and COCO datasets. The table shows that AttnGAN outperforms the other models in terms of IS on the CUB dataset by a small amount and greatly outperforms them on the COCO dataset."
],
[
"Hierarchically-nested adversarial network (HDGAN) is a method proposed by BIBREF36, and its main objective is to tackle the difficult problem of dealing with photographic images from semantic text descriptions. These semantic text descriptions are applied on images from diverse datasets. This method introduces adversarial objectives nested inside hierarchically oriented networks BIBREF36. Hierarchical networks helps regularize mid-level manifestations. In addition to regularize mid-level manifestations, it assists the training of the generator in order to capture highly complex still media elements. These elements are captured in statistical order to train the generator based on settings extracted directly from the image. The latter is an ideal scenario. However, this paper aims to incorporate a single-stream architecture. This single-stream architecture functions as the generator that will form an optimum adaptability towards the jointed discriminators. Once jointed discriminators are setup in an optimum manner, the single-stream architecture will then advance generated images to achieve a much higher resolution BIBREF36.",
"The main contributions of the HDGANs include the introduction of a visual-semantic similarity measure BIBREF36. This feature will aid in the evaluation of the consistency of generated images. In addition to checking the consistency of generated images, one of the key objectives of this step is to test the logical consistency of the end product BIBREF36. The end product in this case would be images that are semantically mapped from text-based natural language descriptions to each area on the picture e.g. a wing on a bird or petal on a flower. Deep learning has created a multitude of opportunities and challenges for researchers in the computer vision AI field. Coupled with GAN and multimodal learning architectures, this field has seen tremendous growth BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Based on these advancements, HDGANs attempt to further extend some desirable and less common features when generating images from textual natural language BIBREF36. In other words, it takes sentences and treats them as a hierarchical structure. This has some positive and negative implications in most cases. For starters, it makes it more complex to generate compelling images. However, one of the key benefits of this elaborate process is the realism obtained once all processes are completed. In addition, one common feature added to this process is the ability to identify parts of sentences with bounding boxes. If a sentence includes common characteristics of a bird, it will surround the attributes of such bird with bounding boxes. In practice, this should happen if the desired image have other elements such as human faces (e.g. eyes, hair, etc), flowers (e.g. petal size, color, etc), or any other inanimate object (e.g. a table, a mug, etc). Finally, HDGANs evaluated some of its claims on common ideal text-to-image datasets such as CUB, COCO, and Oxford-102 BIBREF8, BIBREF36, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. These datasets were first utilized on earlier works BIBREF8, and most of them sport modified features such image annotations, labels, or descriptions. The qualitative and quantitative results reported by researchers in this study were far superior of earlier works in this same field of computer vision AI.",
"black"
],
[
"In this subsection, we introduce text-to-image synthesis methods which try to maximize the diversity of the output images, based on the text descriptions.",
"black"
],
[
"Two issues arise in the traditional GANs BIBREF58 for image synthesis: (1) scalabilirty problem: traditional GANs cannot predict a large number of image categories; and (2) diversity problem: images are often subject to one-to-many mapping, so one image could be labeled as different tags or being described using different texts. To address these problems, GAN conditioned on additional information, e.g. cGAN, is an alternative solution. However, although cGAN and many previously introduced approaches are able to generate images with respect to the text descriptions, they often output images with similar types and visual appearance.",
"black Slightly different from the cGAN, auxiliary classifier GANs (AC-GAN) BIBREF27 proposes to improve the diversity of output images by using an auxiliary classifier to control output images. The overall structure of AC-GAN is shown in Fig. FIGREF15(c). In AC-GAN, every generated image is associated with a class label, in addition to the true/fake label which are commonly used in GAN or cGAN. The discriminator of AC-GAN not only outputs a probability distribution over sources (i.e. whether the image is true or fake), it also output a probability distribution over the class label (i.e. predict which class the image belong to).",
"black By using an auxiliary classifier layer to predict the class of the image, AC-GAN is able to use the predicted class labels of the images to ensure that the output consists of images from different classes, resulting in diversified synthesis images. The results show that AC-GAN can generate images with high diversity.",
"black"
],
[
"Building on the AC-GAN, TAC-GAN BIBREF59 is proposed to replace the class information with textual descriptions as the input to perform the task of text to image synthesis. The architecture of TAC-GAN is shown in Fig. FIGREF15(d), which is similar to AC-GAN. Overall, the major difference between TAC-GAN and AC-GAN is that TAC-GAN conditions the generated images on text descriptions instead of on a class label. This design makes TAC-GAN more generic for image synthesis.",
"black For TAC-GAN, it imposes restrictions on generated images in both texts and class labels. The input vector of TAC-GAN's generative network is built based on a noise vector and embedded vector representation of textual descriptions. The discriminator of TAC-GAN is similar to that of the AC-GAN, which not only predicts whether the image is fake or not, but also predicts the label of the images. A minor difference of TAC-GAN's discriminator, compared to that of the AC-GAN, is that it also receives text information as input before performing its classification.",
"black The experiments and validations, on the Oxford-102 flowers dataset, show that the results produced by TAC-GAN are “slightly better” that other approaches, including GAN-INT-CLS and StackGAN.",
"black"
],
[
"In order to improve the diversity of the output images, both AC-GAN and TAC-GAN's discriminators predict class labels of the synthesised images. This process likely enforces the semantic diversity of the images, but class labels are inherently restrictive in describing image semantics, and images described by text can be matched to multiple labels. Therefore, instead of predicting images' class labels, an alternative solution is to directly quantify their semantic relevance.",
"black The architecture of Text-SeGAN is shown in Fig. FIGREF15(e). In order to directly quantify semantic relevance, Text-SeGAN BIBREF28 adds a regression layer to estimate the semantic relevance between the image and text instead of a classifier layer of predicting labels. The estimated semantic reference is a fractional value ranging between 0 and 1, with a higher value reflecting better semantic relevance between the image and text. Due to this unique design, an inherent advantage of Text-SeGAN is that the generated images are not limited to certain classes and are semantically matching to the text input.",
"black Experiments and validations, on Oxford-102 flower dataset, show that Text-SeGAN can generate diverse images that are semantically relevant to the input text. In addition, the results of Text-SeGAN show improved inception score compared to other approaches, including GAN-INT-CLS, StackGAN, TAC-GAN, and HDGAN.",
"black"
],
[
"Due to the inherent complexity of the visual images, and the diversity of text descriptions (i.e. same words could imply different meanings), it is difficulty to precisely match the texts to the visual images at the semantic levels. For most methods we have discussed so far, they employ a direct text to image generation process, but there is no validation about how generated images comply with the text in a reverse fashion.",
"black To ensure the semantic consistency and diversity, MirrorGAN BIBREF60 employs a mirror structure, which reversely learns from generated images to output texts (an image-to-text process) to further validate whether generated are indeed consistent to the input texts. MirrowGAN includes three modules: a semantic text embedding module (STEM), a global-local collaborative attentive module for cascaded image generation (GLAM), and a semantic text regeneration and alignment module (STREAM). The back to back Text-to-Image (T2I) and Image-to-Text (I2T) are combined to progressively enhance the diversity and semantic consistency of the generated images.",
"black In order to enhance the diversity of the output image, Scene Graph GAN BIBREF61 proposes to use visual scene graphs to describe the layout of the objects, allowing users to precisely specific the relationships between objects in the images. In order to convert the visual scene graph as input for GAN to generate images, this method uses graph convolution to process input graphs. It computes a scene layout by predicting bounding boxes and segmentation masks for objects. After that, it converts the computed layout to an image with a cascaded refinement network.",
"black"
],
[
"Instead of focusing on generating static images, another line of text-to-image synthesis research focuses on generating videos (i.e. sequences of images) from texts. In this context, the synthesised videos are often useful resources for automated assistance or story telling.",
"black"
],
[
"One early/interesting work of motion enhancement GANs is to generate spoofed speech and lip-sync videos (or talking face) of Barack Obama (i.e. ObamaNet) based on text input BIBREF62. This framework is consisted of three parts, i.e. text to speech using “Char2Wav”, mouth shape representation synced to the audio using a time-delayed LSTM and “video generation” conditioned on the mouth shape using “U-Net” architecture. Although the results seem promising, ObamaNet only models the mouth region and the videos are not generated from noise which can be regarded as video prediction other than video generation.",
"black Another meaningful trial of using synthesised videos for automated assistance is to translate spoken language (e.g. text) into sign language video sequences (i.e. T2S) BIBREF63. This is often achieved through a two step process: converting texts as meaningful units to generate images, followed by a learning component to arrange images into sequential order for best representation. More specifically, using RNN based machine translation methods, texts are translated into sign language gloss sequences. Then, glosses are mapped to skeletal pose sequences using a lookup-table. To generate videos, a conditional DCGAN with the input of concatenation of latent representation of the image for a base pose and skeletal pose information is built.",
"black"
],
[
"In BIBREF64, a text-to-video model (T2V) is proposed based on the cGAN in which the input is the isometric Gaussian noise with the text-gist vector served as the generator. A key component of generating videos from text is to train a conditional generative model to extract both static and dynamic information from text, followed by a hybrid framework combining a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN).",
"black More specifically, T2V relies on two types of features, static features and dynamic features, to generate videos. Static features, called “gist” are used to sketch text-conditioned background color and object layout structure. Dynamic features, on the other hand, are considered by transforming input text into an image filter which eventually forms the video generator which consists of three entangled neural networks. The text-gist vector is generated by a gist generator which maintains static information (e.g. background) and a text2filter which captures the dynamic information (i.e. actions) in the text to generate videos.",
"black As demonstrated in the paper BIBREF64, the generated videos are semantically related to the texts, but have a rather low quality (e.g. only $64 \\times 64$ resolution).",
"black"
],
[
"Different from T2V which generates videos from a single text, StoryGAN aims to produce dynamic scenes consistent of specified texts (i.e. story written in a multi-sentence paragraph) using a sequential GAN model BIBREF65. Story encoder, context encoder, and discriminators are the main components of this model. By using stochastic sampling, the story encoder intends to learn an low-dimensional embedding vector for the whole story to keep the continuity of the story. The context encoder is proposed to capture contextual information during sequential image generation based on a deep RNN. Two discriminators of StoryGAN are image discriminator which evaluates the generated images and story discriminator which ensures the global consistency.",
"black The experiments and comparisons, on CLEVR dataset and Pororo cartoon dataset which are originally used for visual question answering, show that StoryGAN improves the generated video qualify in terms of Structural Similarity Index (SSIM), visual qualify, consistence, and relevance (the last three measure are based on human evaluation)."
],
[
"Computer vision applications have strong potential for industries including but not limited to the medical, government, military, entertainment, and online social media fields BIBREF7, BIBREF66, BIBREF67, BIBREF68, BIBREF69, BIBREF70. Text-to-image synthesis is one such application in computer vision AI that has become the main focus in recent years due to its potential for providing beneficial properties and opportunities for a wide range of applicable areas.",
"Text-to-image synthesis is an application byproduct of deep convolutional decoder networks in combination with GANs BIBREF7, BIBREF8, BIBREF10. Deep convolutional networks have contributed to several breakthroughs in image, video, speech, and audio processing. This learning method intends, among other possibilities, to help translate sequential text descriptions to images supplemented by one or many additional methods. Algorithms and methods developed in the computer vision field have allowed researchers in recent years to create realistic images from plain sentences. Advances in the computer vision, deep convolutional nets, and semantic units have shined light and redirected focus to this research area of text-to-image synthesis, having as its prime directive: to aid in the generation of compelling images with as much fidelity to text descriptions as possible.",
"To date, models for generating synthetic images from textual natural language in research laboratories at universities and private companies have yielded compelling images of flowers and birds BIBREF8. Though flowers and birds are the most common objects studied thus far, research has been applied to other classes as well. For example, there have been studies focused solely on human faces BIBREF7, BIBREF8, BIBREF71, BIBREF72.",
"It’s a fascinating time for computer vision AI and deep learning researchers and enthusiasts. The consistent advancement in hardware, software, and contemporaneous development of computer vision AI research disrupts multiple industries. These advances in technology allow for the extraction of several data types from a variety of sources. For example, image data captured from a variety of photo-ready devices, such as smart-phones, and online social media services opened the door to the analysis of large amounts of media datasets BIBREF70. The availability of large media datasets allow new frameworks and algorithms to be proposed and tested on real-world data."
],
[
"A summary of some reviewed methods and benchmark datasets used for validation is reported in Table TABREF43. In addition, the performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48.",
"In order to synthesize images from text descriptions, many frameworks have taken a minimalistic approach by creating small and background-less images BIBREF73. In most cases, the experiments were conducted on simple datasets, initially containing images of birds and flowers. BIBREF8 contributed to these data sets by adding corresponding natural language text descriptions to subsets of the CUB, MSCOCO, and Oxford-102 datasets, which facilitated the work on text-to-image synthesis for several papers released more recently.",
"While most deep learning algorithms use MNIST BIBREF74 dataset as the benchmark, there are three main datasets that are commonly used for evaluation of proposed GAN models for text-to-image synthesis: CUB BIBREF75, Oxford BIBREF76, COCO BIBREF77, and CIFAR-10 BIBREF78. CUB BIBREF75 contains 200 birds with matching text descriptions and Oxford BIBREF76 contains 102 categories of flowers with 40-258 images each and matching text descriptions. These datasets contain individual objects, with the text description corresponding to that object, making them relatively simple. COCO BIBREF77 is much more complex, containing 328k images with 91 different object types. CIFAI-10 BIBREF78 dataset consists of 60000 32$times$32 colour images in 10 classes, with 6000 images per class. In contrast to CUB and Oxford, whose images each contain an individual object, COCO’s images may contain multiple objects, each with a label, so there are many labels per image. The total number of labels over the 328k images is 2.5 million BIBREF77."
],
[
"Several evaluation metrics are used for judging the images produced by text-to-image GANs. Proposed by BIBREF25, Inception Scores (IS) calculates the entropy (randomness) of the conditional distribution, obtained by applying the Inception Model introduced in BIBREF79, and marginal distribution of a large set of generated images, which should be low and high, respectively, for meaningful images. Low entropy of conditional distribution means that the evaluator is confident that the images came from the data distribution, and high entropy of the marginal distribution means that the set of generated images is diverse, which are both desired features. The IS score is then computed as the KL-divergence between the two entropies. FCN-scores BIBREF2 are computed in a similar manner, relying on the intuition that realistic images generated by a GAN should be able to be classified correctly by a classifier trained on real images of the same distribution. Therefore, if the FCN classifier classifies a set of synthetic images accurately, the image is probably realistic, and the corresponding GAN gets a high FCN score. Frechet Inception Distance (FID) BIBREF80 is the other commonly used evaluation metric, and takes a different approach, actually comparing the generated images to real images in the distribution. A high FID means there is little relationship between statistics of the synthetic and real images and vice versa, so lower FIDs are better.",
"black The performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48. In addition, Figure FIGREF49 further lists the performance of 14 GANs with respect to their Inception Scores (IS)."
],
[
"While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.",
"blackIn terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.",
"blackIn addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are most recently developed methods in the field (both published in 2019), indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception. Technical wise, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used for DM-GAN to select important text information and generate images based on he selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generation, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on the image details and text semantics for better understanding and perception."
],
[
"It is worth noting that although this survey mainly focuses on text-to-image synthesis, there have been other applications of GANs in broader image synthesis field that we found fascinating and worth dedicating a small section to. For example, BIBREF72 used Sem-Latent GANs to generate images of faces based on facial attributes, producing impressive results that, at a glance, could be mistaken for real faces. BIBREF82, BIBREF70, and BIBREF83 demonstrated great success in generating text descriptions from images (image captioning) with great accuracy, with BIBREF82 using an attention-based model that automatically learns to focus on salient objects and BIBREF83 using deep visual-semantic alignments. Finally, there is a contribution made by StackGAN++ that was not mentioned in the dedicated section due to its relation to unconditional image generation as opposed to conditional, namely a color-regularization term BIBREF47. This additional term aims to keep the samples generated from the same input at different stages more consistent in color, which resulted in significantly better results for the unconditional model."
],
[
"The recent advancement in text-to-image synthesis research opens the door to several compelling methods and architectures. The main objective of text-to-image synthesis initially was to create images from simple labels, and this objective later scaled to natural languages. In this paper, we reviewed novel methods that generate, in our opinion, the most visually-rich and photo-realistic images, from text-based natural language. These generated images often rely on generative adversarial networks (GANs), deep convolutional decoder networks, and multimodal learning methods.",
"blackIn the paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and difference of different methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions from some advanced GAN framworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAn, StoryGAN etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch size samples. In other words, beyond the work of BIBREF8 in which images were generated from text in 64$\\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to allocate some important papers that were as impressive as the papers we finally surveyed. Though, these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images."
],
[
"The authors declare that there is no conflict of interest regarding the publication of this article."
]
]
} | {
"question": [
"Is text-to-image synthesis trained is suppervized or unsuppervized manner?",
"What challenges remain unresolved?",
"What is the conclusion of comparison of proposed solution?",
"What is typical GAN architecture for each text-to-image synhesis group?"
],
"question_id": [
"e96adf8466e67bd19f345578d5a6dc68fd0279a1",
"c1477a6c86bd1670dd17407590948000c9a6b7c6",
"e020677261d739c35c6f075cde6937d0098ace7f",
"6389d5a152151fb05aae00b53b521c117d7b5e54"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"unsupervised "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Following the above definition, the $\\min \\max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\\theta _d$) and generator ($\\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\\max _{\\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\\min _{\\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs."
],
"highlighted_evidence": [
"Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. "
]
},
{
"unanswerable": false,
"extractive_spans": [
"Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"black Deep learning shed some light to some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, a bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text to image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carry five text descriptions. These text descriptions were created by the research team when setting up the evaluation environment. The DC-GANs model is subsequently trained on several subcategories. Subcategories in this research represent the training and testing sub datasets. The performance shown by these experiments display a promising yet effective way to generate images from textual natural language descriptions BIBREF8."
],
"highlighted_evidence": [
"Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47."
]
}
],
"annotation_id": [
"45a2b7dc749c642c3ed415dd5a44202ad8b6ac61",
"b4fc38fa3c0347286c4cae9d60f5bb527cf6ae85"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"give more independence to the several learning methods (e.g. less human intervention) involved in the studies",
"increasing the size of the output images"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"blackIn the paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and difference of different methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions from some advanced GAN framworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAn, StoryGAN etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch size samples. In other words, beyond the work of BIBREF8 in which images were generated from text in 64$\\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to allocate some important papers that were as impressive as the papers we finally surveyed. Though, these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images."
],
"highlighted_evidence": [
"Looking into the future, an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images."
]
}
],
"annotation_id": [
"31015e42a831e288126a933eac9521a9e04d65d0"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset",
"In terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor",
"text to image synthesis is continuously improving the results for better visual perception and interception"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.",
"blackIn terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.",
"blackIn addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are most recently developed methods in the field (both published in 2019), indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception. Technical wise, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used for DM-GAN to select important text information and generate images based on he selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generation, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on the image details and text semantics for better understanding and perception."
],
"highlighted_evidence": [
"In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset.",
"In terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis.",
"In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are most recently developed methods in the field (both published in 2019), indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception."
]
}
],
"annotation_id": [
"ddd78b6aa4dc2e986a9b1ab93331c47e29896f01"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Semantic Enhancement GANs: DC-GANs, MC-GAN\nResolution Enhancement GANs: StackGANs, AttnGAN, HDGAN\nDiversity Enhancement GANs: AC-GAN, TAC-GAN etc.\nMotion Enhancement GAGs: T2S, T2V, StoryGAN",
"evidence": [
"In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANS to solve certain aspects of the text-to-mage synthesis challenges.",
"FLOAT SELECTED: Figure 9. A Taxonomy and categorization of advanced GAN frameworks for Text-to-Image Synthesis. We categorize advanced GAN frameworks into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. The relationship between relevant frameworks and their publication date are also outlined as a reference."
],
"highlighted_evidence": [
"In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24.",
"FLOAT SELECTED: Figure 9. A Taxonomy and categorization of advanced GAN frameworks for Text-to-Image Synthesis. We categorize advanced GAN frameworks into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. The relationship between relevant frameworks and their publication date are also outlined as a reference."
]
}
],
"annotation_id": [
"f9a6b735c8b98ce2874c4eb5e4a122b468b6a66d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1. Early research on text-to-image synthesis (Zhu et al., 2007). The system uses correlation between keywords (or keyphrase) and images and identifies informative and “picturable” text units, then searches for the most likely image parts conditioned on the text, and eventually optimizes the picture layout conditioned on both the text and image parts.",
"Figure 2. Supervised learning based text-to-image synthesis (Yan et al., 2016a). The supervised learning process aims to learn layered generative models to generate visual content. Because the learning is customized/conditioned by the given attributes, the generative models of Attribute2Image can generative images with respect to different attributes, such as hair color, age, etc.",
"Figure 3. Generative adversarial neural network (GAN) based text-to-image synthesis (Huang et al., 2018). GAN based text-to-image synthesis combines discriminative and generative learning to train neural networks resulting in the generated images semantically resemble to the training samples or tailored to a subset of training images (i.e. conditioned outputs). ϕ() is a feature embedding function, which converts text as feature vector. z is a latent vector following normal distributions with zero mean. x̂ = G(z,ϕ(t) denotes a synthetic image generated from the generator, using latent vector z and the text features ϕ(t) as the input. D(x̂,ϕ(t)) denotes the prediction of the discriminator based on the input x̂ the generated image and ϕ(t) text information of the generated image. The explanations about the generators and discriminators are detailed in Section 3.1.",
"Figure 4. A visual summary of GAN based text-to-image (T2I) synthesis process, and the summary of GAN based frameworks/methods reviewed in the survey.",
"Figure 5. A conceptual view of the GenerativeAdversarial Network (GAN) architecture. The Generator G(z) is trained to generate synthetic/fake resemble to real samples, from a random noise distribution. The fake samples are fed to the Discriminator D(x) along with real samples. The Discriminator is trained to differentiate fake samples from real samples. The iterative training of the generator and the discriminator helps GAN deliver good generator generating samples very close to the underlying training samples.",
"Figure 6. A conceptual view of the conditional GAN architecture. The Generator G(z|y) generates samples from a random noise distribution and some condition vector (in this case text). The fake samples are fed to the Discriminator D(x|y) along with real samples and the same condition vector, and the Discriminator calculates the probability that the fake sample came from the real data distribution.",
"Figure 7. A simple architecture comparisons between five GAN networks for text-to-image synthesis. This figure also explains how texts are fed as input to train GAN to generate images. (a) Conditional GAN (cGAN) (Mirza and Osindero, 2014a) use labels to condition the input to the generator and the discriminator. The final output is discriminator similar to generic GAN; (b) Manifold interpolation matchingaware discriminator GAN (GAN-INT-CLS) (Reed et al., 2016b) feeds text input to both generator and discriminator (texts are preprocessed as embedding features, using function ϕ(), and concatenated with other input, before feeding to both generator and discriminator). The final output is discriminator similar to generic GAN; (c) Auxiliary classifier GAN (AC-GAN) (Odena et al., 2017b) uses an auxiliary classifier layer to predict the class of the image to ensure that the output consists of images from different classes, resulting in diversified synthesis images; (d) text conditioned auxiliary classifier GAN (TACGAN) (Dash et al., 2017a) share similar design as GAN-INT-CLS, whereas the output include both a discriminator and a classifier (which can be used for classification); and (e) text conditioned semantic classifier GAN (Text-SeGAN) (Cha et al., 2019a) uses a regression layer to estimate the semantic relevance between the image, so the generated images are not limited to certain classes and are semantically matching to the text input.",
"Figure 8. A high level comparison of several advanced GANs framework for text-to-image synthesis. All frameworks take text (red triangle) as input and generate output images. From left to right, (A) uses multiple discriminators and one generator (Durugkar et al., 2017; Nguyen et al., 2017), (B) uses multiple stage GANs where the output from one GAN is fed to the next GAN as input (Zhang et al., 2017b; Denton et al., 2015b), (C) progressively trains symmetric discriminators and generators (Huang et al., 2017), and (D) uses a single-stream generator with a hierarchically-nested discriminator trained from end-to-end (Zhang et al., 2018d).",
"Figure 9. A Taxonomy and categorization of advanced GAN frameworks for Text-to-Image Synthesis. We categorize advanced GAN frameworks into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GAGs. The relationship between relevant frameworks and their publication date are also outlined as a reference.",
"Table 1. A summary of different GANs and datasets used for validation. AX symbol indicates that the model was evaluated using the corresponding dataset",
"Table 2. A summary of performance of different methods with respect to the three benchmark datasets and four performancemetrics: Inception Score (IS), Frechet Inception Distance (FID), Human Classifier (HC), and SSIM scores. The generative adversarial networks inlcude DCGAN, GAN-INT-CLS, DongGAN, Paired-D-GAN, StackGAN, StackGAN++, AttnGAN, ObjGAN,HDGAN, DM-GAN, TAC-GAN, Text-SeGAN, Scene Graph GAN, and MirrorGAN. The three benchmark datasets include CUB, Oxford, and COCO datasets. A dash indicates that no data was found.",
"Figure 10. Performance comparison between 14 GANs with respect to their Inception Scores (IS).",
"Figure 11. Examples of best images of “birds” generated by GAN-INT-CLS, StackGAN, StackGAN++, AttnGAN, and HDGAN. Images reprinted from Zhang et al. (2017b,b, 2018b); Xu et al. (2017), and Zhang et al. (2018d), respectively.",
"Figure 12. Examples of best images of “a plate of vegetables” generated by GAN-INT-CLS, StackGAN, StackGAN++, AttnGAN, and HDGAN. Images reprinted from Zhang et al. (2017b,b, 2018b); Xu et al. (2017), and Zhang et al. (2018d), respectively."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"7-Figure5-1.png",
"8-Figure6-1.png",
"9-Figure7-1.png",
"10-Figure8-1.png",
"12-Figure9-1.png",
"18-Table1-1.png",
"20-Table2-1.png",
"21-Figure10-1.png",
"21-Figure11-1.png",
"22-Figure12-1.png"
]
} |
1904.05584 | Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study | In this paper we study how different ways of combining character and word-level representations affect the quality of both final word and sentence representations. We provide strong empirical evidence that modeling characters improves the learned representations at the word and sentence levels, and that doing so is particularly useful when representing less frequent words. We further show that a feature-wise sigmoid gating mechanism is a robust method for creating representations that encode semantic similarity, as it performed reasonably well in several word similarity datasets. Finally, our findings suggest that properly capturing semantic similarity at the word level does not consistently yield improved performance in downstream sentence-level tasks. Our code is available at https://github.com/jabalazs/gating | {
"section_name": [
"Introduction",
"Background",
"Mapping Characters to Character-level Word Representations",
"Combining Character and Word-level Representations",
"Obtaining Sentence Representations",
"Experimental Setup",
"Datasets",
"Word Similarity",
"Word Frequencies and Gating Values",
"Sentence-level Evaluation",
"Relationship Between Word- and Sentence-level Evaluation Tasks",
"Gating Mechanisms for Combining Characters and Word Representations",
"Sentence Representation Learning",
"General Feature-wise Transformations",
"Conclusions",
"Acknowledgements",
"Hyperparameters",
"Sentence Evaluation Datasets"
],
"paragraphs": [
[
"Incorporating sub-word structures like substrings, morphemes and characters to the creation of word representations significantly increases their quality as reflected both by intrinsic metrics and performance in a wide range of downstream tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .",
"The reason for this improvement is related to sub-word structures containing information that is usually ignored by standard word-level models. Indeed, when representing words as vectors extracted from a lookup table, semantically related words resulting from inflectional processes such as surf, surfing, and surfed, are treated as being independent from one another. Further, word-level embeddings do not account for derivational processes resulting in syntactically-similar words with different meanings such as break, breakable, and unbreakable. This causes derived words, which are usually less frequent, to have lower-quality (or no) vector representations.",
"Previous works have successfully combined character-level and word-level word representations, obtaining overall better results than using only word-level representations. For example BIBREF1 achieved state-of-the-art results in a machine translation task by representing unknown words as a composition of their characters. BIBREF4 created word representations by adding the vector representations of the words' surface forms and their morphemes ( INLINEFORM0 ), obtaining significant improvements on intrinsic evaluation tasks, word similarity and machine translation. BIBREF5 concatenated character-level and word-level representations for creating word representations, and then used them as input to their models for obtaining state-of-the-art results in Named Entity Recognition on several languages.",
"What these works have in common is that the models they describe first learn how to represent subword information, at character BIBREF1 , morpheme BIBREF4 , or substring BIBREF0 levels, and then combine these learned representations at the word level. The incorporation of information at a finer-grained hierarchy results in higher-quality modeling of rare words, morphological processes, and semantics BIBREF6 .",
"There is no consensus, however, on which combination method works better in which case, or how the choice of a combination method affects downstream performance, either measured intrinsically at the word level, or extrinsically at the sentence level.",
"In this paper we aim to provide some intuitions about how the choice of mechanism for combining character-level with word-level representations influences the quality of the final word representations, and the subsequent effect these have in the performance of downstream tasks. Our contributions are as follows:"
],
[
"We are interested in studying different ways of combining word representations, obtained from different hierarchies, into a single word representation. Specifically, we want to study how combining word representations (1) taken directly from a word embedding lookup table, and (2) obtained from a function over the characters composing them, affects the quality of the final word representations.",
"Let INLINEFORM0 be a set, or vocabulary, of words with INLINEFORM1 elements, and INLINEFORM2 a vocabulary of characters with INLINEFORM3 elements. Further, let INLINEFORM4 be a sequence of words, and INLINEFORM5 be the sequence of characters composing INLINEFORM6 . Each token INLINEFORM7 can be represented as a vector INLINEFORM8 extracted directly from an embedding lookup table INLINEFORM9 , pre-trained or otherwise, and as a vector INLINEFORM10 built from the characters that compose it; in other words, INLINEFORM11 , where INLINEFORM12 is a function that maps a sequence of characters to a vector.",
"The methods for combining word and character-level representations we study, are of the form INLINEFORM0 where INLINEFORM1 is the final word representation."
],
[
"The function INLINEFORM0 is composed of an embedding layer, an optional context function, and an aggregation function.",
"The embedding layer transforms each character INLINEFORM0 into a vector INLINEFORM1 of dimension INLINEFORM2 , by directly taking it from a trainable embedding lookup table INLINEFORM3 . We define the matrix representation of word INLINEFORM4 as INLINEFORM5 .",
"The context function takes INLINEFORM0 as input and returns a context-enriched matrix representation INLINEFORM1 , in which each INLINEFORM2 contains a measure of information about its context, and interactions with its neighbors. In particular, we chose to do this by feeding INLINEFORM3 to a BiLSTM BIBREF7 , BIBREF8 .",
"Informally, we can think of LSTM BIBREF10 as a function INLINEFORM0 that takes a matrix INLINEFORM1 as input and returns a context-enriched matrix representation INLINEFORM2 , where each INLINEFORM3 encodes information about the previous elements INLINEFORM4 .",
"A BiLSTM is simply composed of 2 LSTM, one that reads the input from left to right (forward), and another that does so from right to left (backward). The output of the forward and backward LSTM are INLINEFORM0 and INLINEFORM1 respectively. In the backward case the LSTM reads INLINEFORM2 first and INLINEFORM3 last, therefore INLINEFORM4 will encode the context from INLINEFORM5 .",
"The aggregation function takes the context-enriched matrix representation of word INLINEFORM0 for both directions, INLINEFORM1 and INLINEFORM2 , and returns a single vector INLINEFORM3 . To do so we followed BIBREF11 , and defined the character-level representation INLINEFORM4 of word INLINEFORM5 as the linear combination of the forward and backward last hidden states returned by the context function: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are trainable parameters, and INLINEFORM2 represents the concatenation operation between two vectors."
],
[
"We tested three different methods for combining INLINEFORM0 with INLINEFORM1 : simple concatenation, a learned scalar gate BIBREF11 , and a learned vector gate (also referred to as feature-wise sigmoidal gate). Additionally, we compared these methods to two baselines: using pre-trained word vectors only, and using character-only features for representing words. See fig:methods for a visual description of the proposed methods.",
"word-only (w) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0 ",
"char-only (c) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0 ",
"concat (cat) concatenates both word and character-level representations: DISPLAYFORM0 ",
"scalar gate (sg) implements the scalar gating mechanism described by BIBREF11 : DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , and INLINEFORM3 is the sigmoid function.",
"vector gate (vg): DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , INLINEFORM3 is the element-wise sigmoid function, INLINEFORM4 is the element-wise product for vectors, and INLINEFORM5 is a vector of ones.",
"The vector gate is inspired by BIBREF11 and BIBREF12 , but is different to the former in that the gating mechanism acts upon each dimension of the word and character-level vectors, and different to the latter in that it does not rely on external sources of information for calculating the gating mechanism.",
"Finally, note that word only and char only are special cases of both gating mechanisms: INLINEFORM0 (scalar gate) and INLINEFORM1 (vector gate) correspond to word only; INLINEFORM2 and INLINEFORM3 correspond to char only."
],
[
"To enable sentence-level classification we need to obtain a sentence representation from the word vectors INLINEFORM0 . We achieved this by using a BiLSTM with max pooling, which was shown to be a good universal sentence encoding mechanism BIBREF13 .",
"Let INLINEFORM0 , be an input sentence and INLINEFORM1 its matrix representation, where each INLINEFORM2 was obtained by one of the methods described in subsec:methods. INLINEFORM3 is the context-enriched matrix representation of INLINEFORM4 obtained by feeding INLINEFORM5 to a BiLSTM of output dimension INLINEFORM6 . Lastly, INLINEFORM11 is the final sentence representation of INLINEFORM12 obtained by max-pooling INLINEFORM13 along the sequence dimension.",
"Finally, we initialized the word representations INLINEFORM0 using GloVe embeddings BIBREF14 , and fine-tuned them during training. Refer to app:hyperparams for details on the other hyperparameters we used."
],
[
"We trained our models for solving the Natural Language Inference (NLI) task in two datasets, SNLI BIBREF15 and MultiNLI BIBREF16 , and validated them in each corresponding development set (including the matched and mismatched development sets of MultiNLI).",
"For each dataset-method combination we trained 7 models initialized with different random seeds, and saved each when it reached its best validation accuracy. We then evaluated the quality of each trained model's word representations INLINEFORM0 in 10 word similarity tasks, using the system created by BIBREF17 .",
"Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 ."
],
[
"Word-level Semantic Similarity A desirable property of vector representations of words is that semantically similar words should have similar vector representations. Assessing whether a set of word representations possesses this quality is referred to as the semantic similarity task. This is the most widely-used evaluation method for evaluating word representations, despite its shortcomings BIBREF20 .",
"This task consists of comparing the similarity between word vectors measured by a distance metric (usually cosine distance), with a similarity score obtained from human judgements. High correlation between these similarities is an indicator of good performance.",
"A problem with this formulation though, is that the definition of “similarity” often confounds the meaning of both similarity and relatedness. For example, cup and tea are related but dissimilar words, and this type of distinction is not always clear BIBREF21 , BIBREF22 .",
"To face the previous problem, we tested our methods in a wide variety of datasets, including some that explicitly model relatedness (WS353R), some that explicitly consider similarity (WS353S, SimLex999, SimVerb3500), and some where the distinction is not clear (MEN, MTurk287, MTurk771, RG, WS353). We also included the RareWords (RW) dataset for evaluating the quality of rare word representations. See appendix:datasets for a more complete description of the datasets we used.",
"Sentence-level Evaluation Tasks Unlike word-level representations, there is no consensus on the desirable properties sentence representations should have. In response to this, BIBREF13 created SentEval, a sentence representation evaluation benchmark designed for assessing how well sentence representations perform in various downstream tasks BIBREF23 .",
"Some of the datasets included in SentEval correspond to sentiment classification (CR, MPQA, MR, SST2, and SST5), subjectivity classification (SUBJ), question-type classification (TREC), recognizing textual entailment (SICK E), estimating semantic relatedness (SICK R), and measuring textual semantic similarity (STS16, STSB). The datasets are described by BIBREF13 , and we provide pointers to their original sources in the appendix table:sentence-eval-datasets.",
"To evaluate these sentence representations SentEval trained a linear model on top of them, and evaluated their performance in the validation sets accompanying each dataset. The only exception was the STS16 task, in which our representations were evaluated directly."
],
[
"table:wordlevelresults shows the quality of word representations in terms of the correlation between word similarity scores obtained by the proposed models and word similarity scores defined by humans.",
"First, we can see that for each task, character only models had significantly worse performance than every other model trained on the same dataset. The most likely explanation for this is that these models are the only ones that need to learn word representations from scratch, since they have no access to the global semantic knowledge encoded by the GloVe embeddings.",
"Further, bold results show the overall trend that vector gates outperformed the other methods regardless of training dataset. This implies that learning how to combine character and word-level representations at the dimension level produces word vector representations that capture a notion of word similarity and relatedness that is closer to that of humans.",
"Additionally, results from the MNLI row in general, and underlined results in particular, show that training on MultiNLI produces word representations better at capturing word similarity. This is probably due to MultiNLI data being richer than that of SNLI. Indeed, MultiNLI data was gathered from various sources (novels, reports, letters, and telephone conversations, among others), rather than the single image captions dataset from which SNLI was created.",
"Exceptions to the previous rule are models evaluated in MEN and RW. The former case can be explained by the MEN dataset containing only words that appear as image labels in the ESP-Game and MIRFLICKR-1M image datasets BIBREF24 , and therefore having data that is more closely distributed to SNLI than to MultiNLI.",
"More notably, in the RareWords dataset BIBREF25 , the word only, concat, and scalar gate methods performed equally, despite having been trained in different datasets ( INLINEFORM0 ), and the char only method performed significantly worse when trained in MultiNLI. The vector gate, however, performed significantly better than its counterpart trained in SNLI. These facts provide evidence that this method is capable of capturing linguistic phenomena that the other methods are unable to model.",
"table:word-similarity-dataset lists the word-similarity datasets and their corresponding reference. As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each.",
"MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets.",
"MTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times.",
"MTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 .",
"RG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”.",
"RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words.",
"SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs.",
"SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness.",
"WS353 contains 353 word pairs annotated with similarity scores from 0 to 10.",
"WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed by the union of related and unrelated pairs.",
"WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed by the union of similar and unrelated pairs, as described previously."
],
[
"fig:gatingviz shows that for more common words the vector gate mechanism tends to favor only a few dimensions while keeping a low average gating value across dimensions. On the other hand, values are greater and more homogeneous across dimensions in rarer words. Further, fig:freqvsgatevalue shows this mechanism assigns, on average, a greater gating value to less frequent words, confirming the findings by BIBREF11 , and BIBREF12 .",
"In other words, the less frequent the word, the more this mechanism allows the character-level representation to influence the final word representation, as shown by eq:vg. A possible interpretation of this result is that exploiting character information becomes increasingly necessary as word-level representations' quality decrease.",
"Another observable trend in both figures is that gating values tend to be low on average. Indeed, it is possible to see in fig:freqvsgatevalue that the average gating values range from INLINEFORM0 to INLINEFORM1 . This result corroborates the findings by BIBREF11 , stating that setting INLINEFORM2 in eq:scalar-gate, was better than setting it to higher values.",
"In summary, the gating mechanisms learn how to compensate the lack of expressivity of underrepresented words by selectively combining their representations with those of characters."
],
[
"table:sentlevelresults shows the impact that different methods for combining character and word-level word representations have in the quality of the sentence representations produced by our models.",
"We can observe the same trend mentioned in subsec:word-similarity-eval, and highlighted by the difference between bold values, that models trained in MultiNLI performed better than those trained in SNLI at a statistically significant level, confirming the findings of BIBREF13 . In other words, training sentence encoders on MultiNLI yields more general sentence representations than doing so on SNLI.",
"The two exceptions to the previous trend, SICKE and SICKR, benefited more from models trained on SNLI. We hypothesize this is again due to both SNLI and SICK BIBREF26 having similar data distributions.",
"Additionally, there was no method that significantly outperformed the word only baseline in classification tasks. This means that the added expressivity offered by explicitly modeling characters, be it through concatenation or gating, was not significantly better than simply fine-tuning the pre-trained GloVe embeddings for this type of task. We hypothesize this is due to the conflation of two effects. First, the fact that morphological processes might not encode important information for solving these tasks; and second, that SNLI and MultiNLI belong to domains that are too dissimilar to the domains in which the sentence representations are being tested.",
"On the other hand, the vector gate significantly outperformed every other method in the STSB task when trained in both datasets, and in the STS16 task when trained in SNLI. This again hints at this method being capable of modeling phenomena at the word level, resulting in improved semantic representations at the sentence level."
],
[
"It is clear that the better performance the vector gate had in word similarity tasks did not translate into overall better performance in downstream tasks. This confirms previous findings indicating that intrinsic word evaluation metrics are not good predictors of downstream performance BIBREF29 , BIBREF30 , BIBREF20 , BIBREF31 .",
"subfig:mnli-correlations shows that the word representations created by the vector gate trained in MultiNLI had positively-correlated results within several word-similarity tasks. This hints at the generality of the word representations created by this method when modeling similarity and relatedness.",
"However, the same cannot be said about sentence-level evaluation performance; there is no clear correlation between word similarity tasks and sentence-evaluation tasks. This is clearly illustrated by performance in the STSBenchmark, the only in which the vector gate was significantly superior, not being correlated with performance in any word-similarity dataset. This can be interpreted simply as word-level representations capturing word-similarity not being a sufficient condition for good performance in sentence-level tasks.",
"In general, fig:correlations shows that there are no general correlation effects spanning both training datasets and combination mechanisms. For example, subfig:snli-correlations shows that, for both word-only and concat models trained in SNLI, performance in word similarity tasks correlates positively with performance in most sentence evaluation tasks, however, this does not happen as clearly for the same models trained in MultiNLI (subfig:mnli-correlations)."
],
[
"To the best of our knowledge, there are only two recent works that specifically study how to combine word and subword-level vector representations.",
" BIBREF11 propose to use a trainable scalar gating mechanism capable of learning a weighting scheme for combining character-level and word-level representations. They compared their proposed method to manually weighting both levels; using characters only; words only; or their concatenation. They found that in some datasets a specific manual weighting scheme performed better, while in others the learned scalar gate did.",
" BIBREF12 further expand the gating concept by making the mechanism work at a finer-grained level, learning how to weight each vector's dimensions independently, conditioned on external word-level features such as part-of-speech and named-entity tags. Similarly, they compared their proposed mechanism to using words only, characters only, and a concatenation of both, with and without external features. They found that their vector gate performed better than the other methods in all the reported tasks, and beat the state of the art in two reading comprehension tasks.",
"Both works showed that the gating mechanisms assigned greater importance to character-level representations in rare words, and to word-level representations in common ones, reaffirming the previous findings that subword structures in general, and characters in particular, are beneficial for modeling uncommon words."
],
[
"The problem of representing sentences as fixed-length vectors has been widely studied.",
" BIBREF32 suggested a self-adaptive hierarchical model that gradually composes words into intermediate phrase representations, and adaptively selects specific hierarchical levels for specific tasks. BIBREF33 proposed an encoder-decoder model trained by attempting to reconstruct the surrounding sentences of an encoded passage, in a fashion similar to Skip-gram BIBREF34 . BIBREF35 overcame the previous model's need for ordered training sentences by using autoencoders for creating the sentence representations. BIBREF36 implemented a model simpler and faster to train than the previous two, while having competitive performance. Similar to BIBREF33 , BIBREF37 suggested predicting future sentences with a hierarchical CNN-LSTM encoder.",
" BIBREF13 trained several sentence encoding architectures on a combination of the SNLI and MultiNLI datasets, and showed that a BiLSTM with max-pooling was the best at producing highly transferable sentence representations. More recently, BIBREF18 empirically showed that sentence representations created in a multi-task setting BIBREF38 , performed increasingly better the more tasks they were trained in. BIBREF39 proposed using an autoencoder that relies on multi-head self-attention over the concatenation of the max and mean pooled encoder outputs for producing sentence representations. Finally, BIBREF40 show that modern sentence embedding methods are not vastly superior to random methods.",
"The works mentioned so far usually evaluate the quality of the produced sentence representations in sentence-level downstream tasks. Common benchmarks grouping these kind of tasks include SentEval BIBREF23 , and GLUE BIBREF41 . Another trend, however, is to probe sentence representations to understand what linguistic phenomena they encode BIBREF42 , BIBREF43 , BIBREF44 , BIBREF45 , BIBREF46 ."
],
[
" BIBREF47 provide a review on feature-wise transformation methods, of which the mechanisms presented in this paper form a part of. In a few words, the INLINEFORM0 parameter, in both scalar gate and vector gate mechanisms, can be understood as a scaling parameter limited to the INLINEFORM1 range and conditioned on word representations, whereas adding the scaled INLINEFORM2 and INLINEFORM3 representations can be seen as biasing word representations conditioned on character representations.",
"The previous review extends the work by BIBREF48 , which describes the Feature-wise Linear Modulation (FiLM) framework as a generalization of Conditional Normalization methods, and apply it in visual reasoning tasks. Some of the reported findings are that, in general, scaling has greater impact than biasing, and that in a setting similar to the scalar gate, limiting the scaling parameter to INLINEFORM0 hurt performance. Future decisions involving the design of mechanisms for combining character and word-level representations should be informed by these insights."
],
[
"We presented an empirical study showing the effect that different ways of combining character and word representations has in word-level and sentence-level evaluation tasks.",
"We showed that a vector gate performed consistently better across a variety of word similarity and relatedness tasks. Additionally, despite showing inconsistent results in sentence evaluation tasks, it performed significantly better than the other methods in semantic similarity tasks.",
"We further showed through this mechanism, that learning character-level representations is always beneficial, and becomes increasingly so with less common words.",
"In the future it would be interesting to study how the choice of mechanism for combining subword and word representations affects the more recent language-model-based pretraining methods such as ELMo BIBREF49 , GPT BIBREF50 , BIBREF51 and BERT BIBREF52 ."
],
[
"Thanks to Edison Marrese-Taylor and Pablo Loyola for their feedback on early versions of this manuscript. We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of one of the GPUs used for this research. Jorge A. Balazs is partially supported by the Japanese Government MEXT Scholarship."
],
[
"We only considered words that appear at least twice, for each dataset. Those that appeared only once were considered UNK. We used the Treebank Word Tokenizer as implemented in NLTK for tokenizing the training and development datasets.",
"In the same fashion as conneau2017supervised, we used a batch size of 64, an SGD optmizer with an initial learning rate of INLINEFORM0 , and at each epoch divided the learning rate by 5 if the validation accuracy decreased. We also used gradient clipping when gradients where INLINEFORM1 .",
"We defined character vector representations as 50-dimensional vectors randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.",
"The output dimension of the character-level BiLSTM was 300 per direction, and remained of such size after combining forward and backward representations as depicted in eq. EQREF9 .",
"Word vector representations where initialized from the 300-dimensional GloVe vectors BIBREF14 , trained in 840B tokens from the Common Crawl, and finetuned during training. Words not present in the GloVe vocabulary where randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.",
"The input size of the word-level LSTM was 300 for every method except concat in which it was 600, and its output was always 2048 per direction, resulting in a 4096-dimensional sentence representation."
],
[
"table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from.",
"The version of the CR, MPQA, MR, and SUBJ datasets used in this paper were the ones preprocessed by BIBREF75 . Both SST2 and SST5 correspond to preprocessed versions of the SST dataset by BIBREF74 . SST2 corresponds to a subset of SST used by BIBREF54 containing flat representations of sentences annotated with binary sentiment labels, and SST5 to another subset annotated with more fine-grained sentiment labels (very negative, negative, neutral, positive, very positive)."
]
]
} | {
"question": [
"Where do they employ feature-wise sigmoid gating?",
"Which model architecture do they use to obtain representations?",
"Which downstream sentence-level tasks do they evaluate on?",
"Which similarity datasets do they use?"
],
"question_id": [
"7fe48939ce341212c1d801095517dc552b98e7b3",
"65ad17f614b7345f0077424c04c94971c831585b",
"323e100a6c92d3fe503f7a93b96d821408f92109",
"9f89bff89cea722debc991363f0826de945bc582"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"gating mechanism acts upon each dimension of the word and character-level vectors"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The vector gate is inspired by BIBREF11 and BIBREF12 , but is different to the former in that the gating mechanism acts upon each dimension of the word and character-level vectors, and different to the latter in that it does not rely on external sources of information for calculating the gating mechanism."
],
"highlighted_evidence": [
"The vector gate is inspired by BIBREF11 and BIBREF12 , but is different to the former in that the gating mechanism acts upon each dimension of the word and character-level vectors, and different to the latter in that it does not rely on external sources of information for calculating the gating mechanism."
]
}
],
"annotation_id": [
"d92555e3c0f24a34117b95e5e520a55ad588eb8e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BiLSTM with max pooling"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To enable sentence-level classification we need to obtain a sentence representation from the word vectors INLINEFORM0 . We achieved this by using a BiLSTM with max pooling, which was shown to be a good universal sentence encoding mechanism BIBREF13 ."
],
"highlighted_evidence": [
"To enable sentence-level classification we need to obtain a sentence representation from the word vectors INLINEFORM0 . We achieved this by using a BiLSTM with max pooling, which was shown to be a good universal sentence encoding mechanism BIBREF13 ."
]
}
],
"annotation_id": [
"ba4bf35bc0b271ac2b86145e2af2f4c6909a14eb"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BIBREF13 , BIBREF18"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .",
"table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from.",
"FLOAT SELECTED: Table B.2: Sentence representation evaluation datasets. SST5 was obtained from a GitHub repository with no associated peer-reviewed work."
],
"highlighted_evidence": [
"Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .",
"table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper.",
"FLOAT SELECTED: Table B.2: Sentence representation evaluation datasets. SST5 was obtained from a GitHub repository with no associated peer-reviewed work."
]
}
],
"annotation_id": [
"9e334521a65870cfbb94461e1ec21ab0edfd0947"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"MEN",
"MTurk287",
"MTurk771",
"RG",
"RW",
"SimLex999",
"SimVerb3500",
"WS353",
"WS353R",
"WS353S"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"table:word-similarity-dataset lists the word-similarity datasets and their corresponding reference. As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each.",
"MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets.",
"MTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times.",
"MTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 .",
"RG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”.",
"RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words.",
"SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs.",
"SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness.",
"WS353 contains 353 word pairs annotated with similarity scores from 0 to 10.",
"WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed by the union of related and unrelated pairs.",
"WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed by the union of similar and unrelated pairs, as described previously."
],
"highlighted_evidence": [
"As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each.\n\nMEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets.\n\nMTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times.\n\nMTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 .\n\nRG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”.\n\nRW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words.\n\nSimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs.\n\nSimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness.\n\nWS353 contains 353 word pairs annotated with similarity scores from 0 to 10.\n\nWS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed by the union of related and unrelated pairs.\n\nWS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed by the union of similar and unrelated pairs, as described previously."
]
},
{
"unanswerable": false,
"extractive_spans": [
"WS353S",
"SimLex999",
"SimVerb3500"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To face the previous problem, we tested our methods in a wide variety of datasets, including some that explicitly model relatedness (WS353R), some that explicitly consider similarity (WS353S, SimLex999, SimVerb3500), and some where the distinction is not clear (MEN, MTurk287, MTurk771, RG, WS353). We also included the RareWords (RW) dataset for evaluating the quality of rare word representations. See appendix:datasets for a more complete description of the datasets we used."
],
"highlighted_evidence": [
"To face the previous problem, we tested our methods in a wide variety of datasets, including some that explicitly model relatedness (WS353R), some that explicitly consider similarity (WS353S, SimLex999, SimVerb3500), and some where the distinction is not clear (MEN, MTurk287, MTurk771, RG, WS353). "
]
}
],
"annotation_id": [
"366af11c69842d7ae0138fbb041401747f4dd933",
"87a1a9eaffdded9d71c7eb62194a18bd78194647"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Figure 1: Character and Word-level combination methods.",
"Table 1: Word-level evaluation results. Each value corresponds to average Pearson correlation of 7 identical models initialized with different random seeds. Correlations were scaled to the [−100; 100] range for easier reading. Bold values represent the best method per training dataset, per task; underlined values represent the best-performing method per task, independent of training dataset. For each task and dataset, every best-performing method was significantly different to other methods (p < 0.05), except for w trained in SNLI at the MTurk287 task. Statistical significance was obtained with a two-sided Welch’s t-test for two independent samples without assuming equal variance (Welch, 1947).",
"Figure 2: Visualization of gating values for 5 common words (freq. ∼ 20000), 5 uncommon words (freq. ∼ 60), and 5 rare words (freq. ∼ 2), appearing in both the RW and MultiNLI datasets.",
"Figure 3: Average gating values for words appearing in both RW and MultiNLI. Words are sorted by decreasing frequency in MultiNLI.",
"Table 2: Experimental results. Each value shown in the table is the average result of 7 identical models initialized with different random seeds. Values represent accuracy (%) unless indicated by †, in which case they represent Pearson correlation scaled to the range [−100, 100] for easier reading. Bold values represent the best method per training dataset, per task; underlined values represent the best-performing method per task, independent of training dataset. Values marked with an asterisk (∗) are significantly different to the average performance of the best model trained on the same dataset (p < 0.05). Results for every best-performing method trained on one dataset are significantly different to the best-performing method trained on the other. Statistical significance was obtained in the same way as described in table 1.",
"Figure 4: Spearman correlation between performances in word and sentence level evaluation tasks.",
"Table B.1: Word similarity and relatedness datasets.",
"Table B.2: Sentence representation evaluation datasets. SST5 was obtained from a GitHub repository with no associated peer-reviewed work."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Figure2-1.png",
"6-Figure3-1.png",
"7-Table2-1.png",
"8-Figure4-1.png",
"14-TableB.1-1.png",
"15-TableB.2-1.png"
]
} |
1911.09886 | Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction | A relation tuple consists of two entities and the relation between them, and often such tuples are found in unstructured text. There may be multiple relation tuples present in a text and they may share one or both entities among them. Extracting such relation tuples from a sentence is a difficult task and sharing of entities or overlapping entities among the tuples makes it more challenging. Most prior work adopted a pipeline approach where entities were identified first followed by finding the relations among them, thus missing the interaction among the relation tuples in a sentence. In this paper, we propose two approaches to use encoder-decoder architecture for jointly extracting entities and relations. In the first approach, we propose a representation scheme for relation tuples which enables the decoder to generate one word at a time like machine translation models and still finds all the tuples present in a sentence with full entity names of different length and with overlapping entities. Next, we propose a pointer network-based decoding approach where an entire tuple is generated at every time step. Experiments on the publicly available New York Times corpus show that our proposed approaches outperform previous work and achieve significantly higher F1 scores. | {
"section_name": [
"Introduction",
"Task Description",
"Encoder-Decoder Architecture",
"Encoder-Decoder Architecture ::: Embedding Layer & Encoder",
"Encoder-Decoder Architecture ::: Word-level Decoder & Copy Mechanism",
"Encoder-Decoder Architecture ::: Pointer Network-Based Decoder",
"Encoder-Decoder Architecture ::: Relation Tuple Extraction",
"Encoder-Decoder Architecture ::: Attention Modeling",
"Encoder-Decoder Architecture ::: Loss Function",
"Experiments ::: Datasets",
"Experiments ::: Parameter Settings",
"Experiments ::: Baselines and Evaluation Metrics",
"Experiments ::: Experimental Results",
"Analysis and Discussion ::: Ablation Studies",
"Analysis and Discussion ::: Performance Analysis",
"Analysis and Discussion ::: Error Analysis",
"Related Work",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Distantly-supervised information extraction systems extract relation tuples with a set of pre-defined relations from text. Traditionally, researchers BIBREF0, BIBREF1, BIBREF2 use pipeline approaches where a named entity recognition (NER) system is used to identify the entities in a sentence and then a classifier is used to find the relation (or no relation) between them. However, due to the complete separation of entity detection and relation classification, these models miss the interaction between multiple relation tuples present in a sentence.",
"Recently, several neural network-based models BIBREF3, BIBREF4 were proposed to jointly extract entities and relations from a sentence. These models used a parameter-sharing mechanism to extract the entities and relations in the same network. But they still find the relations after identifying all the entities and do not fully capture the interaction among multiple tuples. BIBREF5 (BIBREF5) proposed a joint extraction model based on neural sequence tagging scheme. But their model could not extract tuples with overlapping entities in a sentence as it could not assign more than one tag to a word. BIBREF6 (BIBREF6) proposed a neural encoder-decoder model for extracting relation tuples with overlapping entities. However, they used a copy mechanism to copy only the last token of the entities, thus this model could not extract the full entity names. Also, their best performing model used a separate decoder to extract each tuple which limited the power of their model. This model was trained with a fixed number of decoders and could not extract tuples beyond that number during inference. Encoder-decoder models are powerful models and they are successful in many NLP tasks such as machine translation, sentence generation from structured data, and open information extraction.",
"In this paper, we explore how encoder-decoder models can be used effectively for extracting relation tuples from sentences. There are three major challenges in this task: (i) The model should be able to extract entities and relations together. (ii) It should be able to extract multiple tuples with overlapping entities. (iii) It should be able to extract exactly two entities of a tuple with their full names. To address these challenges, we propose two novel approaches using encoder-decoder architecture. We first propose a new representation scheme for relation tuples (Table TABREF1) such that it can represent multiple tuples with overlapping entities and different lengths of entities in a simple way. We employ an encoder-decoder model where the decoder extracts one word at a time like machine translation models. At the end of sequence generation, due to the unique representation of the tuples, we can extract the tuples from the sequence of words. Although this model performs quite well, generating one word at a time is somewhat unnatural for this task. Each tuple has exactly two entities and one relation, and each entity appears as a continuous text span in a sentence. The most effective way to identify them is to find their start and end location in the sentence. Each relation tuple can then be represented using five items: start and end location of the two entities and the relation between them (see Table TABREF1). Keeping this in mind, we propose a pointer network-based decoding framework. This decoder consists of two pointer networks which find the start and end location of the two entities in a sentence, and a classification network which identifies the relation between them. At every time step of the decoding, this decoder extracts an entire relation tuple, not just a word. Experiments on the New York Times (NYT) datasets show that our approaches work effectively for this task and achieve state-of-the-art performance. To summarize, the contributions of this paper are as follows:",
"(1) We propose a new representation scheme for relation tuples such that an encoder-decoder model, which extracts one word at each time step, can still find multiple tuples with overlapping entities and tuples with multi-token entities from sentences. We also propose a masking-based copy mechanism to extract the entities from the source sentence only.",
"(2) We propose a modification in the decoding framework with pointer networks to make the encoder-decoder model more suitable for this task. At every time step, this decoder extracts an entire relation tuple, not just a word. This new decoding framework helps in speeding up the training process and uses less resources (GPU memory). This will be an important factor when we move from sentence-level tuple extraction to document-level extraction.",
"(3) Experiments on the NYT datasets show that our approaches outperform all the previous state-of-the-art models significantly and set a new benchmark on these datasets."
],
[
"A relation tuple consists of two entities and a relation. Such tuples can be found in sentences where an entity is a text span in a sentence and a relation comes from a pre-defined set $R$. These tuples may share one or both entities among them. Based on this, we divide the sentences into three classes: (i) No Entity Overlap (NEO): A sentence in this class has one or more tuples, but they do not share any entities. (ii) Entity Pair Overlap (EPO): A sentence in this class has more than one tuple, and at least two tuples share both the entities in the same or reverse order. (iii) Single Entity Overlap (SEO): A sentence in this class has more than one tuple and at least two tuples share exactly one entity. It should be noted that a sentence can belong to both EPO and SEO classes. Our task is to extract all relation tuples present in a sentence."
],
[
"In this task, input to the system is a sequence of words, and output is a set of relation tuples. In our first approach, we represent each tuple as entity1 ; entity2 ; relation. We use `;' as a separator token to separate the tuple components. Multiple tuples are separated using the `$\\vert $' token. We have included one example of such representation in Table TABREF1. Multiple relation tuples with overlapping entities and different lengths of entities can be represented in a simple way using these special tokens (; and $\\vert $). During inference, after the end of sequence generation, relation tuples can be extracted easily using these special tokens. Due to this uniform representation scheme, where entity tokens, relation tokens, and special tokens are treated similarly, we use a shared vocabulary between the encoder and decoder which includes all of these tokens. The input sentence contains clue words for every relation which can help generate the relation tokens. We use two special tokens so that the model can distinguish between the beginning of a relation tuple and the beginning of a tuple component. To extract the relation tuples from a sentence using the encoder-decoder model, the model has to generate the entity tokens, find relation clue words and map them to the relation tokens, and generate the special tokens at appropriate time. Our experiments show that the encoder-decoder models can achieve this quite effectively."
],
[
"We create a single vocabulary $V$ consisting of the source sentence tokens, relation names from relation set $R$, special separator tokens (`;', `$\\vert $'), start-of-target-sequence token (SOS), end-of-target-sequence token (EOS), and unknown word token (UNK). Word-level embeddings are formed by two components: (1) pre-trained word vectors (2) character embedding-based feature vectors. We use a word embedding layer $\\mathbf {E}_w \\in \\mathbb {R}^{\\vert V \\vert \\times d_w}$ and a character embedding layer $\\mathbf {E}_c \\in \\mathbb {R}^{\\vert A \\vert \\times d_c}$, where $d_w$ is the dimension of word vectors, $A$ is the character alphabet of input sentence tokens, and $d_c$ is the dimension of character embedding vectors. Following BIBREF7 (BIBREF7), we use a convolutional neural network with max-pooling to extract a feature vector of size $d_f$ for every word. Word embeddings and character embedding-based feature vectors are concatenated ($\\Vert $) to obtain the representation of the input tokens.",
"A source sentence $\\mathbf {S}$ is represented by vectors of its tokens $\\mathbf {x}_1, \\mathbf {x}_2,....,\\mathbf {x}_n$, where $\\mathbf {x}_i \\in \\mathbb {R}^{(d_w+d_f)}$ is the vector representation of the $i$th word and $n$ is the length of $\\mathbf {S}$. These vectors $\\mathbf {x}_i$ are passed to a bi-directional LSTM BIBREF8 (Bi-LSTM) to obtain the hidden representation $\\mathbf {h}_i^E$. We set the hidden dimension of the forward and backward LSTM of the Bi-LSTM to be $d_h/2$ to obtain $\\mathbf {h}_i^E \\in \\mathbb {R}^{d_h}$, where $d_h$ is the hidden dimension of the sequence generator LSTM of the decoder described below."
],
[
"A target sequence $\\mathbf {T}$ is represented by only word embedding vectors of its tokens $\\mathbf {y}_0, \\mathbf {y}_1,....,\\mathbf {y}_m$ where $\\mathbf {y}_i \\in \\mathbb {R}^{d_w}$ is the embedding vector of the $i$th token and $m$ is the length of the target sequence. $\\mathbf {y}_0$ and $\\mathbf {y}_m$ represent the embedding vector of the SOS and EOS token respectively. The decoder generates one token at a time and stops when EOS is generated. We use an LSTM as the decoder and at time step $t$, the decoder takes the source sentence encoding ($\\mathbf {e}_t \\in \\mathbb {R}^{d_h}$) and the previous target word embedding ($\\mathbf {y}_{t-1}$) as the input and generates the hidden representation of the current token ($\\mathbf {h}_t^D \\in \\mathbb {R}^{d_h}$). The sentence encoding vector $\\mathbf {e}_t$ can be obtained using attention mechanism. $\\mathbf {h}_t^D$ is projected to the vocabulary $V$ using a linear layer with weight matrix $\\mathbf {W}_v \\in \\mathbb {R}^{\\vert V \\vert \\times d_h}$ and bias vector $\\mathbf {b}_v \\in \\mathbb {R}^{\\vert V \\vert }$ (projection layer).",
"$\\mathbf {o}_t$ represents the normalized scores of all the words in the embedding vocabulary at time step $t$. $\\mathbf {h}_{t-1}^D$ is the previous hidden state of the LSTM.",
"The projection layer of the decoder maps the decoder output to the entire vocabulary. During training, we use the gold label target tokens directly. However, during inference, the decoder may predict a token from the vocabulary which is not present in the current sentence or the set of relations or the special tokens. To prevent this, we use a masking technique while applying the softmax operation at the projection layer. We mask (exclude) all words of the vocabulary except the current source sentence tokens, relation tokens, separator tokens (`;', `$\\vert $'), UNK, and EOS tokens in the softmax operation. To mask (exclude) some word from softmax, we set the corresponding value in $\\hat{\\mathbf {o}}_t$ at $-\\infty $ and the corresponding softmax score will be zero. This ensures the copying of entities from the source sentence only. We include the UNK token in the softmax operation to make sure that the model generates new entities during inference. If the decoder predicts an UNK token, we replace it with the corresponding source word which has the highest attention score. During inference, after decoding is finished, we extract all tuples based on the special tokens, remove duplicate tuples and tuples in which both entities are the same or tuples where the relation token is not from the relation set. This model is referred to as WordDecoding (WDec) henceforth."
],
[
"In the second approach, we identify the entities in the sentence using their start and end locations. We remove the special tokens and relation names from the word vocabulary and word embeddings are used only at the encoder side along with character embeddings. We use an additional relation embedding matrix $\\mathbf {E}_r \\in \\mathbb {R}^{\\vert R \\vert \\times d_r}$ at the decoder side of our model, where $R$ is the set of relations and $d_r$ is the dimension of relation vectors. The relation set $R$ includes a special relation token EOS which indicates the end of the sequence. Relation tuples are represented as a sequence $T=y_0, y_1,....,y_m$, where $y_t$ is a tuple consisting of four indexes in the source sentence indicating the start and end location of the two entities and a relation between them (see Table TABREF1). $y_0$ is a dummy tuple that represents the start tuple of the sequence and $y_m$ functions as the end tuple of the sequence which has EOS as the relation (entities are ignored for this tuple). The decoder consists of an LSTM with hidden dimension $d_h$ to generate the sequence of tuples, two pointer networks to find the two entities, and a classification network to find the relation of a tuple. At time step $t$, the decoder takes the source sentence encoding ($\\mathbf {e}_t \\in \\mathbb {R}^{d_h}$) and the representation of all previously generated tuples ($\\mathbf {y}_{prev}=\\sum _{j=0}^{t-1}\\mathbf {y}_{j}$) as the input and generates the hidden representation of the current tuple, $\\mathbf {h}_t^D \\in \\mathbb {R}^{d_h}$. The sentence encoding vector $\\mathbf {e}_t$ is obtained using an attention mechanism as explained later. Relation tuples are a set and to prevent the decoder from generating the same tuple again, we pass the information about all previously generated tuples at each time step of decoding. $\\mathbf {y}_j$ is the vector representation of the tuple predicted at time step $j < t$ and we use the zero vector ($\\mathbf {y}_0=\\overrightarrow{0}$) to represent the dummy tuple $y_0$. $\\mathbf {h}_{t-1}^D$ is the hidden state of the LSTM at time step $t-1$."
],
[
"After obtaining the hidden representation of the current tuple $\\mathbf {h}_t^D$, we first find the start and end pointers of the two entities in the source sentence. We concatenate the vector $\\mathbf {h}_t^D$ with the hidden vectors $\\mathbf {h}_i^E$ of the encoder and pass them to a Bi-LSTM layer with hidden dimension $d_p$ for forward and backward LSTM. The hidden vectors of this Bi-LSTM layer $\\mathbf {h}_i^k \\in \\mathbb {R}^{2d_p}$ are passed to two feed-forward networks (FFN) with softmax to convert each hidden vector into two scalar values between 0 and 1. Softmax operation is applied across all the words in the input sentence. These two scalar values represent the probability of the corresponding source sentence token to be the start and end location of the first entity. This Bi-LSTM layer with the two feed-forward layers is the first pointer network which identifies the first entity of the current relation tuple.",
"where $\\mathbf {W}_s^1 \\in \\mathbb {R}^{1 \\times 2d_p}$, $\\mathbf {W}_e^1 \\in \\mathbb {R}^{1 \\times 2d_p}$, ${b}_s^1$, and ${b}_e^1$ are the weights and bias parameters of the feed-forward layers. ${s}_i^1$, ${e}_i^1$ represent the normalized probabilities of the $i$th source word being the start and end token of the first entity of the predicted tuple. We use another pointer network to extract the second entity of the tuple. We concatenate the hidden vectors $\\mathbf {h}_i^k$ with $\\mathbf {h}_t^D$ and $\\mathbf {h}_i^E$ and pass them to the second pointer network to obtain ${s}_i^2$ and ${e}_i^2$, which represent the normalized probabilities of the $i$th source word being the start and end of the second entity. These normalized probabilities are used to find the vector representation of the two entities, $\\mathbf {a}_t^1$ and $\\mathbf {a}_t^2$.",
"We concatenate the entity vector representations $\\mathbf {a}_t^1$ and $\\mathbf {a}_t^2$ with $\\mathbf {h}_t^D$ and pass it to a feed-forward network (FFN) with softmax to find the relation. This feed-forward layer has a weight matrix $\\mathbf {W}_r \\in \\mathbb {R}^{\\vert R \\vert \\times (8d_p + d_h)}$ and a bias vector $\\mathbf {b}_r \\in \\mathbb {R}^{\\vert R \\vert }$.",
"$\\mathbf {r}_t$ represents the normalized probabilities of the relation at time step $t$. The relation embedding vector $\\mathbf {z}_t$ is obtained using $\\mathrm {argmax}$ of $\\mathbf {r}_t$ and $\\mathbf {E}_r$. $\\mathbf {y}_t \\in \\mathbb {R}^{(8d_p + d_r)}$ is the vector representation of the tuple predicted at time step $t$. During training, we pass the embedding vector of the gold label relation in place of the predicted relation. So the $\\mathrm {argmax}$ function does not affect the back-propagation during training. The decoder stops the sequence generation process when the predicted relation is EOS. This is the classification network of the decoder.",
"During inference, we select the start and end location of the two entities such that the product of the four pointer probabilities is maximized keeping the constraints that the two entities do not overlap with each other and $1 \\le b \\le e \\le n$ where $b$ and $e$ are the start and end location of the corresponding entities. We first choose the start and end location of entity 1 based on the maximum product of the corresponding start and end pointer probabilities. Then we find entity 2 in a similar way excluding the span of entity 1 to avoid overlap. The same procedure is repeated but this time we first find entity 2 followed by entity 1. We choose that pair of entities which gives the higher product of four pointer probabilities between these two choices. This model is referred to as PtrNetDecoding (PNDec) henceforth."
],
[
"We experimented with three different attention mechanisms for our word-level decoding model to obtain the source context vector $\\mathbf {e}_t$:",
"(1) Avg.: The context vector is obtained by averaging the hidden vectors of the encoder: $\\mathbf {e}_t=\\frac{1}{n}\\sum _{i=1}^n \\mathbf {h}_i^E$",
"(2) N-gram: The context vector is obtained by the N-gram attention mechanism of BIBREF9 (BIBREF9) with N=3.",
"$\\textnormal {a}_i^g=(\\mathbf {h}_n^{E})^T \\mathbf {V}^g \\mathbf {w}_i^g$, $\\alpha ^g = \\mathrm {softmax}(\\mathbf {a}^g)$",
"$\\mathbf {e}_t=[\\mathbf {h}_n^E \\Vert \\sum _{g=1}^N \\mathbf {W}^g (\\sum _{i=1}^{\\vert G^g \\vert } \\alpha _i^g \\mathbf {w}_i^g)$]",
"Here, $\\mathbf {h}_n^E$ is the last hidden state of the encoder, $g \\in \\lbrace 1, 2, 3\\rbrace $ refers to the word gram combination, $G^g$ is the sequence of g-gram word representations for the input sentence, $\\mathbf {w}_i^g$ is the $i$th g-gram vector (2-gram and 3-gram representations are obtained by average pooling), $\\alpha _i^g$ is the normalized attention score for the $i$th g-gram vector, $\\mathbf {W} \\in \\mathbb {R}^{d_h \\times d_h}$ and $\\mathbf {V} \\in \\mathbb {R}^{d_h \\times d_h}$ are trainable parameters.",
"(3) Single: The context vector is obtained by the attention mechanism proposed by BIBREF10 (BIBREF10). This attention mechanism gives the best performance with the word-level decoding model.",
"$\\mathbf {u}_t^i = \\mathbf {W}_{u} \\mathbf {h}_i^E, \\quad \\mathbf {q}_t^i = \\mathbf {W}_{q} \\mathbf {h}_{t-1}^D + \\mathbf {b}_{q}$,",
"$\\textnormal {a}_t^i = \\mathbf {v}_a \\tanh (\\mathbf {q}_t^i + \\mathbf {u}_t^i), \\quad \\alpha _t = \\mathrm {softmax}(\\mathbf {a}_t)$,",
"$\\mathbf {e}_t = \\sum _{i=1}^n \\alpha _t^i \\mathbf {h}_i^E$",
"where $\\mathbf {W}_u \\in \\mathbb {R}^{d_h \\times d_h}$, $\\mathbf {W}_q \\in \\mathbb {R}^{d_h \\times d_h}$, and $\\mathbf {v}_a \\in \\mathbb {R}^{d_h}$ are all trainable attention parameters and $\\mathbf {b}_q \\in \\mathbb {R}^{d_h}$ is a bias vector. $\\alpha _t^i$ is the normalized attention score of the $i$th source word at the decoding time step $t$.",
"For our pointer network-based decoding model, we use three variants of the single attention model. First, we use $\\mathbf {h}_{t-1}^D$ to calculate $\\mathbf {q}_t^i$ in the attention mechanism. Next, we use $\\mathbf {y}_{prev}$ to calculate $\\mathbf {q}_t^i$, where $\\mathbf {W}_q \\in \\mathbb {R}^{(8d_p + d_r) \\times d_h}$. In the final variant, we obtain the attentive context vector by concatenating the two attentive vectors obtained using $\\mathbf {h}_{t-1}^D$ and $\\mathbf {y}_{prev}$. This gives the best performance with the pointer network-based decoding model. These variants are referred to as $\\mathrm {dec_{hid}}$, $\\mathrm {tup_{prev}}$, and $\\mathrm {combo}$ in Table TABREF17."
],
[
"We minimize the negative log-likelihood loss of the generated words for word-level decoding ($\\mathcal {L}_{word}$) and minimize the sum of negative log-likelihood loss of relation classification and the four pointer locations for pointer network-based decoding ($\\mathcal {L}_{ptr}$).",
"$v_t^b$ is the softmax score of the target word at time step $t$ for the word-level decoding model. $r$, $s$, and $e$ are the softmax score of the corresponding true relation label, true start and end pointer location of an entity. $b$, $t$, and $c$ refer to the $b$th training instance, $t$th time step of decoding, and the two entities of a tuple respectively. $B$ and $T$ are the batch size and maximum time step of the decoder respectively."
],
[
"We focus on the task of extracting multiple tuples with overlapping entities from sentences. We choose the New York Times (NYT) corpus for our experiments. This corpus has multiple versions, and we choose the following two versions as their test dataset has significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version as NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations. We name this version as NYT29. We select 10% of the original training data and use it as the validation dataset. The remaining 90% is used for training. We include statistics of the training and test datasets in Table TABREF11."
],
[
"We run the Word2Vec BIBREF12 tool on the NYT corpus to initialize the word embeddings. The character embeddings and relation embeddings are initialized randomly. All embeddings are updated during training. We set the word embedding dimension $d_w=300$, relation embedding dimension $d_r=300$, character embedding dimension $d_c=50$, and character-based word feature dimension $d_f=50$. To extract the character-based word feature vector, we set the CNN filter width at 3 and the maximum length of a word at 10. The hidden dimension $d_h$ of the decoder LSTM cell is set at 300 and the hidden dimension of the forward and the backward LSTM of the encoder is set at 150. The hidden dimension of the forward and backward LSTM of the pointer networks is set at $d_p=300$. The model is trained with mini-batch size of 32 and the network parameters are optimized using Adam BIBREF13. Dropout layers with a dropout rate fixed at $0.3$ are used in our network to avoid overfitting."
],
[
"We compare our model with the following state-of-the-art joint entity and relation extraction models:",
"(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM. Sequence LSTM is used to identify all the entities first and then Tree LSTM is used to find the relation between all pairs of entities.",
"(2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder. They used a Cartesian product of entity tags and relation tags to encode the entity and relation information together. This model does not work when tuples have overlapping entities.",
"(3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations. It copies only the last token of an entity from the source sentence. Their best performing multi-decoder model is trained with a fixed number of decoders where each decoder extracts one tuple.",
"(4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction. A high-level RL finds the relation and a low-level RL identifies the two entities using a sequence tagging approach. This sequence tagging approach cannot always ensure extraction of exactly two entities.",
"(5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them. They use graph convolution network (GCN) to predict the relations of every edge and then filter out some of the relations.",
"(6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data. The encoder uses the source tokens as its vocabulary and the decoder uses the entire Wikidata BIBREF15 entity IDs and relation IDs as its vocabulary. The encoder takes the source sentence as input and the decoder outputs the two entity IDs and relation ID for every tuple. During training, it uses the mapping of entity names and their Wikidata IDs of the entire Wikidata for proper alignment. Our task of extracting relation tuples with the raw entity names from a sentence is more challenging since entity names are not of fixed length. Our more generic approach is also helpful for extracting new entities which are not present in the existing knowledge bases such as Wikidata. We use their N-gram attention mechanism in our model to compare its performance with other attention models (Table TABREF17).",
"We use the same evaluation method used by BIBREF11 (BIBREF11) in their experiments. We consider the extracted tuples as a set and remove the duplicate tuples. An extracted tuple is considered as correct if the corresponding full entity names are correct and the relation is also correct. We report precision, recall, and F1 score for comparison."
],
[
"Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores and PNDec achieves $4.2\\%$ and $2.9\\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively."
],
[
"We include the performance of different attention mechanisms with our WordDecoding model, effects of our masking-based copy mechanism, and ablation results of three variants of the single attention mechanism with our PtrNetDecoding model in Table TABREF17. WordDecoding with single attention achieves the highest F1 score on both datasets. We also see that our copy mechanism improves F1 scores by around 4–7% in each attention mechanism with both datasets. PtrNetDecoding achieves the highest F1 scores when we combine the two attention mechanisms with respect to the previous hidden vector of the decoder LSTM ($\\mathbf {h}_{t-1}^D$) and representation of all previously extracted tuples ($\\mathbf {y}_{prev}$)."
],
[
"From Table TABREF15, we see that CopyR, HRL, and our models achieve significantly higher F1 scores on the NYT24 dataset than the NYT29 dataset. Both datasets have a similar set of relations and similar texts (NYT). So task-wise both datasets should pose a similar challenge. However, the F1 scores suggest that the NYT24 dataset is easier than NYT29. The reason is that NYT24 has around 72.0% of overlapping tuples between the training and test data (% of test tuples that appear in the training data with different source sentences). In contrast, NYT29 has only 41.7% of overlapping tuples. Due to the memorization power of deep neural networks, it can achieve much higher F1 score on NYT24. The difference between the F1 scores of WordDecoding and PtrNetDecoding on NYT24 is marginally higher than NYT29, since WordDecoding has more trainable parameters (about 27 million) than PtrNetDecoding (about 24.5 million) and NYT24 has very high tuple overlap. However, their ensemble versions achieve closer F1 scores on both datasets.",
"Despite achieving marginally lower F1 scores, the pointer network-based model can be considered more intuitive and suitable for this task. WordDecoding may not extract the special tokens and relation tokens at the right time steps, which is critical for finding the tuples from the generated sequence of words. PtrNetDecoding always extracts two entities of varying length and a relation for every tuple. We also observe that PtrNetDecoding is more than two times faster and takes one-third of the GPU memory of WordDecoding during training and inference. This speedup and smaller memory consumption are achieved due to the fewer number of decoding steps of PtrNetDecoding compared to WordDecoding. PtrNetDecoding extracts an entire tuple at each time step, whereas WordDecoding extracts just one word at each time step and so requires eight time steps on average to extract a tuple (assuming that the average length of an entity is two). The softmax operation at the projection layer of WordDecoding is applied across the entire vocabulary and the vocabulary size can be large (more than 40,000 for our datasets). In case of PtrNetDecoding, the softmax operation is applied across the sentence length (maximum of 100 in our experiments) and across the relation set (24 and 29 for our datasets). The costly softmax operation and the higher number of decoding time steps significantly increase the training and inference time for WordDecoding. The encoder-decoder model proposed by BIBREF9 (BIBREF9) faces a similar softmax-related problem as their target vocabulary contains the entire Wikidata entity IDs and relation IDs which is in the millions. HRL, which uses a deep reinforcement learning algorithm, takes around 8x more time to train than PtrNetDecoding with a similar GPU configuration. The speedup and smaller memory consumption will be useful when we move from sentence-level extraction to document-level extraction, since document length is much higher than sentence length and a document contains a higher number of tuples."
],
[
"The relation tuples extracted by a joint model can be erroneous for multiple reasons such as: (i) extracted entities are wrong; (ii) extracted relations are wrong; (iii) pairings of entities with relations are wrong. To see the effects of the first two reasons, we analyze the performance of HRL and our models on entity generation and relation generation separately. For entity generation, we only consider those entities which are part of some tuple. For relation generation, we only consider the relations of the tuples. We include the performance of our two models and HRL on entity generation and relation generation in Table TABREF20. Our proposed models perform better than HRL on both tasks. Comparing our two models, PtrNetDecoding performs better than WordDecoding on both tasks, although WordDecoding achieves higher F1 scores in tuple extraction. This suggests that PtrNetDecoding makes more errors while pairing the entities with relations. We further analyze the outputs of our models and HRL to determine the errors due to ordering of entities (Order), mismatch of the first entity (Ent1), and mismatch of the second entity (Ent2) in Table TABREF21. WordDecoding generates fewer errors than the other two models in all the categories and thus achieves the highest F1 scores on both datasets."
],
[
"Traditionally, researchers BIBREF0, BIBREF1, BIBREF2, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25 used a pipeline approach for relation tuple extraction where relations were identified using a classification network after all entities were detected. BIBREF26 (BIBREF26) used an encoder-decoder model to extract multiple relations present between two given entities.",
"Recently, some researchers BIBREF3, BIBREF4, BIBREF27, BIBREF28 tried to bring these two tasks closer together by sharing their parameters and optimizing them together. BIBREF5 (BIBREF5) used a sequence tagging scheme to jointly extract the entities and relations. BIBREF6 (BIBREF6) proposed an encoder-decoder model with copy mechanism to extract relation tuples with overlapping entities. BIBREF11 (BIBREF11) proposed a joint extraction model based on reinforcement learning (RL). BIBREF14 (BIBREF14) used a graph convolution network (GCN) where they treated each token in a sentence as a node in a graph and edges were considered as relations. BIBREF9 (BIBREF9) used an N-gram attention mechanism with an encoder-decoder model for completion of knowledge bases using distant supervised data.",
"Encoder-decoder models have been used for many NLP applications such as neural machine translation BIBREF29, BIBREF10, BIBREF30, sentence generation from structured data BIBREF31, BIBREF32, and open information extraction BIBREF33, BIBREF34. Pointer networks BIBREF35 have been used to extract a text span from text for tasks such as question answering BIBREF36, BIBREF37. For the first time, we use pointer networks with an encoder-decoder model to extract relation tuples from sentences."
],
[
"Extracting relation tuples from sentences is a challenging task due to different length of entities, the presence of multiple tuples, and overlapping of entities among tuples. In this paper, we propose two novel approaches using encoder-decoder architecture to address this task. Experiments on the New York Times (NYT) corpus show that our proposed models achieve significantly improved new state-of-the-art F1 scores. As future work, we would like to explore our proposed models for a document-level tuple extraction task."
],
[
"We would like to thank the anonymous reviewers for their valuable and constructive comments on this paper."
]
]
} | {
"question": [
"Are there datasets with relation tuples annotated, how big are datasets available?",
"Which one of two proposed approaches performed better in experiments?",
"What is previous work authors reffer to?",
"How higher are F1 scores compared to previous work?"
],
"question_id": [
"735f58e28d84ee92024a36bc348cfac2ee114409",
"710fa8b3e74ee63d2acc20af19f95f7702b7ce5e",
"56123dd42cf5c77fc9a88fc311ed2e1eb672126e",
"1898f999626f9a6da637bd8b4857e5eddf2fc729"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We focus on the task of extracting multiple tuples with overlapping entities from sentences. We choose the New York Times (NYT) corpus for our experiments. This corpus has multiple versions, and we choose the following two versions as their test dataset has significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version as NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations. We name this version as NYT29. We select 10% of the original training data and use it as the validation dataset. The remaining 90% is used for training. We include statistics of the training and test datasets in Table TABREF11."
],
"highlighted_evidence": [
"This corpus has multiple versions, and we choose the following two versions as their test dataset has significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version as NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations."
]
}
],
"annotation_id": [
"3fecd676405aba7cae9cf8b1a94afc80c85cfd53"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"WordDecoding (WDec) model"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores and PNDec achieves $4.2\\%$ and $2.9\\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively."
],
"highlighted_evidence": [
"Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively."
]
}
],
"annotation_id": [
"c84efada9376d6cca3b26f0747032136ff633762"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"SPTree",
"Tagging",
"CopyR",
"HRL",
"GraphR",
"N-gram Attention"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compare our model with the following state-of-the-art joint entity and relation extraction models:",
"(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM. Sequence LSTM is used to identify all the entities first and then Tree LSTM is used to find the relation between all pairs of entities.",
"(2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder. They used a Cartesian product of entity tags and relation tags to encode the entity and relation information together. This model does not work when tuples have overlapping entities.",
"(3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations. It copies only the last token of an entity from the source sentence. Their best performing multi-decoder model is trained with a fixed number of decoders where each decoder extracts one tuple.",
"(4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction. A high-level RL finds the relation and a low-level RL identifies the two entities using a sequence tagging approach. This sequence tagging approach cannot always ensure extraction of exactly two entities.",
"(5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them. They use graph convolution network (GCN) to predict the relations of every edge and then filter out some of the relations.",
"(6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data. The encoder uses the source tokens as its vocabulary and the decoder uses the entire Wikidata BIBREF15 entity IDs and relation IDs as its vocabulary. The encoder takes the source sentence as input and the decoder outputs the two entity IDs and relation ID for every tuple. During training, it uses the mapping of entity names and their Wikidata IDs of the entire Wikidata for proper alignment. Our task of extracting relation tuples with the raw entity names from a sentence is more challenging since entity names are not of fixed length. Our more generic approach is also helpful for extracting new entities which are not present in the existing knowledge bases such as Wikidata. We use their N-gram attention mechanism in our model to compare its performance with other attention models (Table TABREF17)."
],
"highlighted_evidence": [
"We compare our model with the following state-of-the-art joint entity and relation extraction models:\n\n(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM.",
"(2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder.",
"(3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations.",
"(4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction.",
"(5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them.",
"(6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data."
]
}
],
"annotation_id": [
"5437ff59df4863dd3efe426a80a54c419f65d206"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively",
"PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores and PNDec achieves $4.2\\%$ and $2.9\\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively."
],
"highlighted_evidence": [
"Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively."
]
},
{
"unanswerable": false,
"extractive_spans": [
"Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively",
"In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\\%$ and $1.3\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores and PNDec achieves $4.2\\%$ and $2.9\\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively."
],
"highlighted_evidence": [
"Among the baselines, HRL achieves significantly higher F1 scores on the two datasets.",
"Our WordDecoding (WDec) model achieves F1 scores that are $3.9\\%$ and $4.1\\%$ higher than HRL on the NYT29 and NYT24 datasets respectively.",
"In the ensemble scenario, compared to HRL, WDec achieves $4.2\\%$ and $3.5\\%$ higher F1 scores and PNDec achieves $4.2\\%$ and $2.9\\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively."
]
}
],
"annotation_id": [
"4343a22d92f83bdd38de26e2a06dbdf561f4271f",
"585bf4e35871a8977533917335a6687244450f46"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Relation tuple representation for encoder-decoder models.",
"Figure 1: The architecture of an encoder-decoder model (left) and a pointer network-based decoder block (right).",
"Table 2: Statistics of train/test split of the two datasets.",
"Table 4: Ablation of attention mechanisms with WordDecoding (WDec) and PtrNetDecoding (PNDec) model.",
"Table 3: Performance comparison on the two datasets.",
"Table 5: Comparison on entity and relation generation tasks.",
"Table 6: % errors for wrong ordering and entity mismatch."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"5-Table2-1.png",
"6-Table4-1.png",
"6-Table3-1.png",
"7-Table5-1.png",
"7-Table6-1.png"
]
} |
1611.01400 | Learning to Rank Scientific Documents from the Crowd | Finding related published articles is an important task in any science, but with the explosion of new work in the biomedical domain it has become especially challenging. Most existing methodologies use text similarity metrics to identify whether two articles are related or not. However, biomedical knowledge discovery is hypothesis-driven, so the most related articles may not be the ones with the highest text similarity. In this study, we first develop an innovative crowd-sourcing approach to build an expert-annotated document-ranking corpus. Using this corpus as the gold standard, we then evaluate approaches that use text similarity to rank the relatedness of articles. Finally, we develop and evaluate a new supervised model to automatically rank related scientific articles. Our results show that authors' rankings differ significantly from the rankings produced by text-similarity-based models. By training a learning-to-rank model on a subset of the annotated corpus, we found that the best supervised learning-to-rank model (SVM-Rank) significantly surpassed state-of-the-art baseline systems. | {
"section_name": [
null,
"Introduction",
"Benchmark Datasets",
"Learning to Rank",
"Features",
"Baseline Systems",
"Evaluation Measures",
"Forward Feature Selection",
"Results",
"Discussion",
"Acknowledgments"
],
"paragraphs": [
[
"[block]I.1em",
"[block]i.1em",
" Learning to Rank Scientific Documents from the CrowdLearning to Rank Scientific Documents from the Crowd ",
"-4",
"[1]1"
],
[
"The number of biomedical research papers published has increased dramatically in recent years. As of October, 2016, PubMed houses over 26 million citations, with almost 1 million from the first 3 quarters of 2016 alone . It has become impossible for any one person to actually read all of the work being published. We require tools to help us determine which research articles would be most informative and related to a particular question or document. For example, a common task when reading articles is to find articles that are most related to another. Major research search engines offer such a “related articles” feature. However, we propose that instead of measuring relatedness by text-similarity measures, we build a model that is able to infer relatedness from the authors' judgments.",
" BIBREF0 consider two kinds of queries important to bibliographic information retrieval: the first is a search query written by the user and the second is a request for documents most similar to a document already judged relevant by the user. Such a query-by-document (or query-by-example) system has been implemented in the de facto scientific search engine PubMed—called Related Citation Search. BIBREF1 show that 19% of all PubMed searches performed by users have at least one click on a related article. Google Scholar provides a similar Related Articles system. Outside of bibliographic retrieval, query-by-document systems are commonly used for patent retrieval, Internet search, and plagiarism detection, amongst others. Most work in the area of query-by-document uses text-based similarity measures ( BIBREF2 , BIBREF3 , BIBREF4 ). However, scientific research is hypothesis driven and therefore we question whether text-based similarity alone is the best model for bibliographic retrieval. In this study we asked authors to rank documents by “closeness” to their work. The definition of “closeness” was left for the authors to interpret, as the goal is to model which documents the authors subjectively feel are closest to their own. Throughout the paper we will use “closeness” and “relatedness” interchangeably.",
"We found that researchers' ranking by closeness differs significantly from the ranking provided by a traditional IR system. Our contributions are three fold:",
"The principal ranking algorithms of query-by-document in bibliographic information retrieval rely mainly on text similarity measures ( BIBREF1 , BIBREF0 ). For example, the foundational work of BIBREF0 introduced the concept of a “document neighborhood” in which they pre-compute a text-similarity based distance between each pair of documents. When a user issues a query, first an initial set of related documents is retrieved. Then, the neighbors of each of those documents is retrieved, i.e., documents with the highest text similarity to those in the initial set. In a later work, BIBREF1 develop the PMRA algorithm for PubMed related article search. PMRA is an unsupervised probabilistic topic model that is trained to model “relatedness” between documents. BIBREF5 introduce the competing algorithm Find-Similar for this task, treating the full text of documents as a query and selecting related documents from the results.",
"Outside bibliographic IR, prior work in query-by-document includes patent retrieval ( BIBREF6 , BIBREF3 ), finding related documents given a manuscript ( BIBREF1 , BIBREF7 ), and web page search ( BIBREF8 , BIBREF9 ). Much of the work focuses on generating shorter queries from the lengthy document. For example, noun-phrase extraction has been used for extracting short, descriptive phrases from the original lengthy text ( BIBREF10 ). Topic models have been used to distill a document into a set of topics used to form query ( BIBREF11 ). BIBREF6 generated queries using the top TF*IDF weighted terms in each document. BIBREF4 suggested extracting phrasal concepts from a document, which are then used to generate queries. BIBREF2 combined query extraction and pseudo-relevance feedback for patent retrieval. BIBREF9 employ supervised machine learning model (i.e., Conditional Random Fields) ( BIBREF12 ) for query generation. BIBREF13 explored ontology to identify chemical concepts for queries.",
"There are also many biomedical-document specific search engines available. Many information retrieval systems focus on question answering systems such as those developed for the TREC Genomics Track ( BIBREF14 ) or BioASQ Question-Answer ( BIBREF15 ) competitions. Systems designed for question-answering use a combination of natural language processing techniques to identify biomedical entities, and then information retrieval systems to extract relevant answers to questions. Systems like those detailed in BIBREF16 can provide answers to yes/no biomedical questions with high precision. However what we propose differs from these systems in a fundamental way: given a specific document, suggest the most important documents that are related to it.",
"The body of work most related to ours is that of citation recommendation. The goal of citation recommendation is to suggest a small number of publications that can be used as high quality references for a particular article ( BIBREF17 , BIBREF1 ). Topic models have been used to rank articles based on the similarity of latent topic distribution ( BIBREF11 , BIBREF18 , BIBREF1 ). These models attempt to decompose a document into a few important keywords. Specifically, these models attempt to find a latent vector representation of a document that has a much smaller dimensionality than the document itself and compare the reduced dimension vectors.",
"Citation networks have also been explored for ranking articles by importance, i.e., authority ( BIBREF19 , BIBREF20 ). BIBREF17 introduced heterogeneous network models, called meta-path based models, to incorporate venues (the conference where a paper is published) and content (the term which links two articles, for citation recommendation). Another highly relevant work is BIBREF8 who decomposed a document to represent it with a compact vector, which is then used to measure the similarity with other documents. Note that we exclude the work of context-aware recommendation, which analyze each citation's local context, which is typically short and does not represent a full document.",
"One of the key contributions of our study is an innovative approach for automatically generating a query-by-document gold standard. Crowd-sourcing has generated large databases, including Wikipedia and Freebase. Recently, BIBREF21 concluded that unpaid participants performed better than paid participants for question answering. They attribute this to unpaid participants being more intrinsically motivated than the paid test takers: they performed the task for fun and already had knowledge about the subject being tested. In contrast, another study, BIBREF22 , compared unpaid workers found through Google Adwords (GA) to paid workers found through Amazon Mechanical Turk (AMT). They found that the paid participants from AMT outperform the unpaid ones. This is attributed to the paid workers being more willing to look up information they didn't know. In the bibliographic domain, authors of scientific publications have contributed annotations ( BIBREF23 ). They found that authors are more willing to annotate their own publications ( BIBREF23 ) than to annotate other publications ( BIBREF24 ) even though they are paid. In this work, our annotated dataset was created by the unpaid authors of the articles."
],
[
"In order to develop and evaluate ranking algorithms we need a benchmark dataset. However, to the best of our knowledge, we know of no openly available benchmark dataset for bibliographic query-by-document systems. We therefore created such a benchmark dataset.",
"The creation of any benchmark dataset is a daunting labor-intensive task, and in particular, challenging in the scientific domain because one must master the technical jargon of a scientific article, and such experts are not easy to find when using traditional crowd-sourcing technologies (e.g., AMT). For our task, the ideal annotator for each of our articles are the authors themselves. The authors of a publication typically have a clear knowledge of the references they cite and their scientific importance to their publication, and therefore may be excellent judges for ranking the reference articles.",
"Given the full text of a scientific publication, we want to rank its citations according to the author's judgments. We collected recent publications from the open-access PLoS journals and asked the authors to rank by closeness five citations we selected from their paper. PLoS articles were selected because its journals cover a wide array of topics and the full text articles are available in XML format. We selected the most recent publications as previous work in crowd-sourcing annotation shows that authors' willingness to participate in an unpaid annotation task declines with the age of publication ( BIBREF23 ). We then extracted the abstract, citations, full text, authors, and corresponding author email address from each document. The titles and abstracts of the citations were retrieved from PubMed, and the cosine similarity between the PLoS abstract and the citation's abstract was calculated. We selected the top five most similar abstracts using TF*IDF weighted cosine similarity, shuffled their order, and emailed them to the corresponding author for annotation. We believe that ranking five articles (rather than the entire collection of the references) is a more manageable task for an author compared to asking them to rank all references. Because the documents to be annotated were selected based on text similarity, they also represent a challenging baseline for models based on text-similarity features. In total 416 authors were contacted, and 92 responded (22% response rate). Two responses were removed from the dataset for incomplete annotation.",
"We asked authors to rank documents by how “close to your work” they were. The definition of closeness was left to the discretion of the author. The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations."
],
[
"Learning-to-rank is a technique for reordering the results returned from a search engine query. Generally, the initial query to a search engine is concerned more with recall than precision: the goal is to obtain a subset of potentially related documents from the corpus. Then, given this set of potentially related documents, learning-to-rank algorithms reorder the documents such that the most relevant documents appear at the top of the list. This process is illustrated in Figure FIGREF6 .",
"There are three basic types of learning-to-rank algorithms: point-wise, pair-wise, and list-wise. Point-wise algorithms assign a score to each retrieved document and rank them by their scores. Pair-wise algorithms turn learning-to-rank into a binary classification problem, obtaining a ranking by comparing each individual pair of documents. List-wise algorithms try to optimize an evaluation parameter over all queries in the dataset.",
"Support Vector Machine (SVM) ( BIBREF25 ) is a commonly used supervised classification algorithm that has shown good performance over a range of tasks. SVM can be thought of as a binary linear classifier where the goal is to maximize the size of the gap between the class-separating line and the points on either side of the line. This helps avoid over-fitting on the training data. SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked ( BIBREF26 ). We use SVMRank in the experiments below. SVMRank has previously been used in the task of document retrieval in ( BIBREF27 ) for a more traditional short query task and has been shown to be a top-performing system for ranking.",
"SVMRank is a point-wise learning-to-rank algorithm that returns scores for each document. We rank the documents by these scores. It is possible that sometimes two documents will have the same score, resulting in a tie. In this case, we give both documents the same rank, and then leave a gap in the ranking. For example, if documents 2 and 3 are tied, their ranked list will be [5, 3, 3, 2, 1].",
"Models are trained by randomly splitting the dataset into 70% training data and 30% test data. We apply a random sub-sampling approach where the dataset is randomly split, trained, and tested 100 times due to the relatively small size of the data. A model is learned for each split and a ranking is produced for each annotated document.",
"We test three different supervised models. The first supervised model uses only text similarity features, the second model uses all of the features, and the third model runs forward feature selection to select the best performing combination of features. We also test using two different models trained on two different datasets: one trained using the gold standard annotations, and another trained using the judgments based on text similarity that were used to select the citations to give to the authors.",
"We tested several different learning to rank algorithms for this work. We found in preliminary testing that SVMRank had the best performance, so it will be used in the following experiments."
],
[
"Each citation is turned into a feature vector representing the relationship between the published article and the citation. Four types of features are used: text similarity, citation count and location, age of the citation, and the number of times the citation has appeared in the literature (citation impact). Text similarity features measure the similarity of the words used in different parts of the document. In this work, we calculate the similarity between a document INLINEFORM0 and a document it cites INLINEFORM1 by transforming the their text into term vectors. For example, to calculate the similarity of the abstracts between INLINEFORM2 and INLINEFORM3 we transform the abstracts into two term vectors, INLINEFORM4 and INLINEFORM5 . The length of each of the term vectors is INLINEFORM6 . We then weight each word by its Term-frequency * Inverse-document frequency (TF*IDF) weight. TF*IDF is a technique to give higher weight to words that appear frequently in a document but infrequently in the corpus. Term frequency is simply the number of times that a word INLINEFORM7 appears in a document. Inverse-document frequency is the logarithmically-scaled fraction of documents in the corpus in which the word INLINEFORM8 appears. Or, more specifically: INLINEFORM9 ",
"where INLINEFORM0 is the total number of documents in the corpus, and the denominator is the number of documents in which a term INLINEFORM1 appears in the corpus INLINEFORM2 . Then, TF*IDF is defined as: INLINEFORM3 ",
"where INLINEFORM0 is a term, INLINEFORM1 is the document, and INLINEFORM2 is the corpus. For example, the word “the” may appear often in a document, but because it also appears in almost every document in the corpus it is not useful for calculating similarity, thus it receives a very low weight. However, a word such as “neurogenesis” may appear often in a document, but does not appear frequently in the corpus, and so it receives a high weight. The similarity between term vectors is then calculated using cosine similarity: INLINEFORM3 ",
"where INLINEFORM0 and INLINEFORM1 are two term vectors. The cosine similarity is a measure of the angle between the two vectors. The smaller the angle between the two vectors, i.e., the more similar they are, then the closer the value is to 1. Conversely, the more dissimilar the vectors, the closer the cosine similarity is to 0.",
"We calculate the text similarity between several different sections of the document INLINEFORM0 and the document it cites INLINEFORM1 . From the citing article INLINEFORM2 , we use the title, full text, abstract, the combined discussion/conclusion sections, and the 10 words on either side of the place in the document where the actual citation occurs. From the document it cites INLINEFORM3 we only use the title and the abstract due to limited availability of the full text. In this work we combine the discussion and conclusion sections of each document because some documents have only a conclusion section, others have only a discussion, and some have both. The similarity between each of these sections from the two documents is calculated and used as features in the model.",
"The age of the citation may be relevant to its importance. As a citation ages, we hypothesize that it is more likely to become a “foundational” citation rather than one that directly influenced the development of the article. Therefore more recent citations may be more likely relevant to the article. Similarly, “citation impact”, that is, the number of times a citation has appeared in the literature (as measured by Google Scholar) may be an indicator of whether or not an article is foundational rather than directly related. We hypothesize that the fewer times an article is cited in the literature, the more impact it had on the article at hand.",
"We also keep track of the number of times a citation is mentioned in both the full text and discussion/conclusion sections. We hypothesize that if a citation is mentioned multiple times, it is more important than citations that are mentioned only once. Further, citations that appear in the discussion/conclusion sections are more likely to be crucial to understanding the results. We normalize the counts of the citations by the total number of citations in that section. In total we select 15 features, shown in Table TABREF15 . The features are normalized within each document so that each of citation features is on a scale from 0 to 1, and are evenly distributed within that range. This is done because some of the features (such as years since citation) are unbounded."
],
[
"We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments.",
"We also compare our rankings to those found on the popular scientific article search engine Google Scholar. Google Scholar is a “black box” IR system: they do not release details about which features they are using and how they judge relevance of documents. Google Scholar provides a “Related Articles” feature for each document in its index that shows the top 100 related documents for each article. To compare our rankings, we search through these related documents and record the ranking at which each of the citations we selected appeared. We scale these rankings such that the lowest ranked article from Google Scholar has the highest relevance ranking in our set. If the cited document does not appear in the set, we set its relevance-ranking equal to one below the lowest relevance ranking found.",
"Four comparisons are performed with the Google Scholar data. (1) We first train a model using our gold standard and see if we can predict Google Scholar's ranking. (2) We compare to a baseline of using Google Scholar's rankings to train and compare with their own rankings using our feature set. (3) Then we train a model using Google Scholar's rankings and try to predict our gold standard. (4) We compare it to the model trained on our gold standard to predict our gold standard."
],
[
"Normalized Discounted Cumulative Gain (NDCG) is a common measure for comparing a list of estimated document relevance judgments with a list of known judgments ( BIBREF28 ). To calculate NDCG we first calculate a ranking's Discounted Cumulative Gain (DCG) as: DISPLAYFORM0 ",
"where rel INLINEFORM0 is the relevance judgment at position INLINEFORM1 . Intuitively, DCG penalizes retrieval of documents that are not relevant (rel INLINEFORM2 ). However, DCG is an unbounded value. In order to compare the DCG between two models, we must normalize it. To do this, we use the ideal DCG (IDCG), i.e., the maximum possible DCG given the relevance judgments. The maximum possible DCG occurs when the relevance judgments are in the correct order. DISPLAYFORM0 ",
"The NDCG value is in the range of 0 to 1, where 0 means that no relevant documents were retrieved, and 1 means that the relevant documents were retrieved and in the correct order of their relevance judgments.",
"Kendall's INLINEFORM0 is a measure of the correlation between two ranked lists. It compares the number of concordant pairs with the number of discordant pairs between each list. A concordant pair is defined over two observations INLINEFORM1 and INLINEFORM2 . If INLINEFORM3 and INLINEFORM4 , then the pair at indices INLINEFORM5 is concordant, that is, the ranking at INLINEFORM6 in both ranking sets INLINEFORM7 and INLINEFORM8 agree with each other. Similarly, a pair INLINEFORM9 is discordant if INLINEFORM10 and INLINEFORM11 or INLINEFORM12 and INLINEFORM13 . Kendall's INLINEFORM14 is then defined as: DISPLAYFORM0 ",
"where C is the number of concordant pairs, D is the number of discordant pairs, and the denominator represents the total number of possible pairs. Thus, Kendall's INLINEFORM0 falls in the range of INLINEFORM1 , where -1 means that the ranked lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that the ranked lists are perfectly correlated. One downside of this measure is that it does not take into account where in the ranked list an error occurs. Information retrieval, in general, cares more about errors near the top of the list rather than errors near the bottom of the list.",
"Average-Precision INLINEFORM0 ( BIBREF29 ) (or INLINEFORM1 ) extends on Kendall's INLINEFORM2 by incorporating the position of errors. If an error occurs near the top of the list, then that is penalized heavier than an error occurring at the bottom of the list. To achieve this, INLINEFORM3 incorporates ideas from the popular Average Precision measure, were we calculate the precision at each index of the list and then average them together. INLINEFORM4 is defined as: DISPLAYFORM0 ",
"Intuitively, if an error occurs at the top of the list, then that error is propagated into each iteration of the summation, meaning that it's penalty is added multiple times. INLINEFORM0 's range is between -1 and 1, where -1 means the lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that they are perfectly correlated."
],
[
"Forward feature selection was performed by iteratively testing each feature one at a time. The highest performing feature is kept in the model, and another sweep is done over the remaining features. This continues until all features have been selected. This approach allows us to explore the effect of combinations of features and the effect of having too many or too few features. It also allows us to evaluate which features and combinations of features are the most powerful."
],
[
"We first compare our gold standard to the baselines. A random baseline is provided for reference. Because all of the documents that we rank are relevant, NDCG will be fairly high simply by chance. We find that the number of times a document is mentioned in the annotated document is significantly better than the random baseline or the citation impact. The more times a document is mentioned in a paper, the more likely the author was to annotate it as important. Interestingly, we see a negative correlation with the citation impact. The more times a document is mentioned in the literature, the less likely it is to be important. These results are shown in Table TABREF14 .",
"Next we rank the raw values of the features and compare them to our gold standard to obtain a baseline (Table TABREF15 ). The best performing text similarity feature is the similarity between the abstract of the annotated document and the abstract of the cited document. However, the number of times that a cited document is mentioned in the text of the annotated document are also high-scoring features, especially in the INLINEFORM0 correlation coefficient. These results indicate that text similarity alone may not be a good measure for judging the rank of a document.",
"Next we test three different feature sets for our supervised learning-to-rank models. The model using only the text similarity features performs poorly: NDCG stays at baseline and the correlation measures are low. Models that incorporate information about the age, number of times a cited document was referenced, and the citation impact of that document in addition to the text similarity features significantly outperformed models that used only text similarity features INLINEFORM0 . Because INLINEFORM1 takes into account the position in the ranking of the errors, this indicates that the All Features model was able to better correctly place highly ranked documents above lower ranked ones. Similarly, because Kendall's INLINEFORM2 is an overall measure of correlation that does not take into account the position of errors, the higher value here means that more rankings were correctly placed. Interestingly, feature selection (which is optimized for NDCG) does not outperform the model using all of the features in terms of our correlation measures. The features chosen during forward feature selection are (1) the citation impact, (2) number of mentions in the full text, (3) text similarity between the annotated document's title and the referenced document's abstract, (4) the text similarity between the annotated document's discussion/conclusion section and the referenced document's title. These results are shown in Table TABREF16 . The models trained on the text similarity judgments perform worse than the models trained on the annotated data. However, in terms of both NDCG and the correlation measures, they perform significantly better than the random baseline.",
"Next we compare our model to Google Scholar's rankings. Using the ranking collected from Google Scholar, we build a training set to try to predict our authors' rankings. We find that Google Scholar performs similarly to the text-only features model. This indicates that the rankings we obtained from the authors are substantially different than the rankings that Google Scholar provides. Results appear in Table TABREF17 ."
],
[
"We found that authors rank the references they cite substantially differently from rankings based on text-similarity. Our results show that decomposing a document into a set of features that is able to capture that difference is key. While text similarity is indeed important (as evidenced by the Similarity(a,a) feature in Table TABREF15 ), we also found that the number of times a document is referenced in the text and the number of times a document is referenced in the literature are also both important features (via feature selection). The more often a citation is mentioned in the text, the more likely it is to be important. This feature is often overlooked in article citation recommendation. We also found that recency is important: the age of the citation is negatively correlated with the rank. Newer citations are more likely to be directly important than older, more foundational citations. Additionally, the number of times a document is cited in the literature is negatively correlated with rank. This is likely due to highly cited documents being more foundational works; they may be older papers that are important to the field but not directly influential to the new work.",
"The model trained using the author's judgments does significantly better than the model trained using the text-similarity-based judgments. An error analysis was performed to find out why some of the rankings disagreed with the author's annotations. We found that in some cases our features were unable to capture the relationship: for example a biomedical document applying a model developed in another field to the dataset may use very different language to describe the model than the citation. Previous work adopting topic models to query document search may prove useful for such cases.",
"A small subset of features ended up performing as well as the full list of features. The number of times a citation was mentioned and the citation impact score in the literature ended up being two of the most important features. Indeed, without the citation-based features, the model performs as though it were trained with the text-similarity rankings. Feature engineering is a part of any learning-to-rank system, especially in domain-specific contexts. Citations are an integral feature of our dataset. For learning-to-rank to be applied to other datasets feature engineering must also occur to exploit the unique properties of those datasets. However, we show that combining the domain-specific features with more traditional text-based features does improve the model's scores over simply using the domain-specific features themselves.",
"Interestingly, citation impact and age of the citation are both negatively correlated with rank. We hypothesize that this is because both measures can be indicators of recency: a new publication is more likely to be directly influenced by more recent work. Many other related search tools, however, treat the citation impact as a positive feature of relatedness: documents with a higher citation impact appear higher on the list of related articles than those with lower citation impacts. This may be the opposite of what the user actually desires.",
"We also found that rankings from our text-similarity based IR system or Google Scholar's IR system were unable to rank documents by the authors' annotations as well as our system. In one sense, this is reasonable: the rankings coming from these systems were from a different system than the author annotations. However, in domain-specific IR, domain experts are the best judges. We built a system that exploits these expert judgments. The text similarity and Google Scholar models were able to do this to some extent, performing above the random baseline, but not on the level of our model.",
"Additionally, we observe that NDCG may not be the most appropriate measure for comparing short ranked lists where all of the documents are relevant to some degree. NDCG gives a lot of credit to relevant documents that occur in the highest ranks. However, all of the documents here are relevant, just to varying degrees. Thus, NDCG does not seem to be the most appropriate measure, as is evident in our scores. The correlation coefficients from Kendall's INLINEFORM0 and INLINEFORM1 seem to be far more appropriate for this case, as they are not concerned with relevance, only ranking.",
"One limitation of our work is that we selected a small set of references based on their similarities to the article that cites them. Ideally, we would have had authors rank all of their citations for us, but this would have been a daunting task for authors to perform. We chose to use the Google Scholar dataset in order to attempt to mitigate this: we obtain a ranking for the set of references from a system that is also ranking many other documents. The five citations selected by TF*IDF weighted cosine similarity represent a “hard” gold standard: we are attempting to rank documents that are known to all be relevant by their nature, and have high similarity with the text. Additionally, there are plethora of other, more expensive features we could explore to improve the model. Citation network features, phrasal concepts, and topic models could all be used to help improve our results, at the cost of computational complexity.",
"We have developed a model for fast related-document ranking based on crowd-sourced data. The model, data, and data collection software are all publicly available and can easily be used in future applications as an automatic search to help users find the most important citations given a particular document. The experimental setup is portable to other datasets with some feature engineering. We were able to identify that several domain-specific features were crucial to our model, and that we were able to improve on the results of simply using those features alone by adding more traditional features.",
"Query-by-document is a complicated and challenging task. We provide an approach with an easily obtained dataset and a computationally inexpensive model. By working with biomedical researchers we were able to build a system that ranks documents in a quantitatively different way than previous systems, and to provide a tool that helps researchers find related documents."
],
[
"We would like to thank all of the authors who took the time to answer our citation ranking survey. This work is supported by National Institutes of Health with the grant number 1R01GM095476. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."
]
]
} | {
"question": [
"what were the baselines?",
"what is the supervised model they developed?",
"what is the size of this built corpus?",
"what crowdsourcing platform is used?"
],
"question_id": [
"d32b6ac003cfe6277f8c2eebc7540605a60a3904",
"c10f38ee97ed80484c1a70b8ebba9b1fb149bc91",
"340501f23ddc0abe344a239193abbaaab938cc3a",
"fbb85cbd41de6d2818e77e8f8d4b91e431931faa"
],
"nlp_background": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Rank by the number of times a citation is mentioned in the document",
" Rank by the number of times the citation is cited in the literature (citation impact). ",
"Rank using Google Scholar Related Articles.",
"Rank by the TF*IDF weighted cosine similarity. ",
"ank using a learning-to-rank model trained on text similarity rankings"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments."
],
"highlighted_evidence": [
"We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments."
]
},
{
"unanswerable": false,
"extractive_spans": [
"(1) Rank by the number of times a citation is mentioned in the document.",
"(2) Rank by the number of times the citation is cited in the literature (citation impact).",
"(3) Rank using Google Scholar Related Articles.",
"(4) Rank by the TF*IDF weighted cosine similarity.",
"(5) Rank using a learning-to-rank model trained on text similarity rankings."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments."
],
"highlighted_evidence": [
"We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings."
]
}
],
"annotation_id": [
"5f74a7bcfb0ffcf1ed7099c1510ff6f23957461e",
"93d37b33ab04b5abbf6a221b262a930e1e8fe7ae"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"SVMRank"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Support Vector Machine (SVM) ( BIBREF25 ) is a commonly used supervised classification algorithm that has shown good performance over a range of tasks. SVM can be thought of as a binary linear classifier where the goal is to maximize the size of the gap between the class-separating line and the points on either side of the line. This helps avoid over-fitting on the training data. SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked ( BIBREF26 ). We use SVMRank in the experiments below. SVMRank has previously been used in the task of document retrieval in ( BIBREF27 ) for a more traditional short query task and has been shown to be a top-performing system for ranking."
],
"highlighted_evidence": [
"SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked ( BIBREF26 ). We use SVMRank in the experiments below. "
]
}
],
"annotation_id": [
"b926478872dd0d25b50b2be14a4d2500deda01d1"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We asked authors to rank documents by how “close to your work” they were. The definition of closeness was left to the discretion of the author. The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations."
],
"highlighted_evidence": [
"The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations."
]
}
],
"annotation_id": [
"64a9f2f8d5c94eef5888e4daf18010f4d6d7a8d0"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"asked the authors to rank by closeness five citations we selected from their paper"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Given the full text of a scientific publication, we want to rank its citations according to the author's judgments. We collected recent publications from the open-access PLoS journals and asked the authors to rank by closeness five citations we selected from their paper. PLoS articles were selected because its journals cover a wide array of topics and the full text articles are available in XML format. We selected the most recent publications as previous work in crowd-sourcing annotation shows that authors' willingness to participate in an unpaid annotation task declines with the age of publication ( BIBREF23 ). We then extracted the abstract, citations, full text, authors, and corresponding author email address from each document. The titles and abstracts of the citations were retrieved from PubMed, and the cosine similarity between the PLoS abstract and the citation's abstract was calculated. We selected the top five most similar abstracts using TF*IDF weighted cosine similarity, shuffled their order, and emailed them to the corresponding author for annotation. We believe that ranking five articles (rather than the entire collection of the references) is a more manageable task for an author compared to asking them to rank all references. Because the documents to be annotated were selected based on text similarity, they also represent a challenging baseline for models based on text-similarity features. In total 416 authors were contacted, and 92 responded (22% response rate). Two responses were removed from the dataset for incomplete annotation."
],
"highlighted_evidence": [
"Given the full text of a scientific publication, we want to rank its citations according to the author's judgments. We collected recent publications from the open-access PLoS journals and asked the authors to rank by closeness five citations we selected from their paper."
]
}
],
"annotation_id": [
"f315bf40a95f238f0fc6806cf70cc3caece88616"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: The basic pipeline of a learning-to-rank system. An initial set of results for a query is retrieved from a search engine, and then that subset is reranked. During the reranking phase new features may be extracted.",
"Table 1: Results for the citation baselines. The number of times a citation is mentioned in the document is a better indicator of rank than the citation impact.",
"Table 2: Results for ranking by each individual feature value. Similarity features are text similarity features. The first parameter is the section of text in the annotated document, the second parameter is the section of text in the referenced document. Here, “a” means abstract, “t” means title, “f” means full text, “c” means the 10 word window around a citation, “d” means the discussion/conclusion sections, and “cd” means 10 word windows around citations in the discussion/conclusion section. Age is the age of the referenced document, MentionCount is the number of times the annotated document mentions the referenced document in text, and CitationImpact is the number of documents that have cited the referenced document in the literature.",
"Table 3: Results for the SVMRank models for three different combinations of features. “Text Only Features” are only the text similarity features. “Feature Selection” is the set of features found after running a forward feature selection algorithm. A “*” indicates statistical significance between the two models.",
"Table 4: Results for the model trained using the Google Scholar Related Articles ranking. We find that building a model using Google Scholar’s Related Articles ranking to predict our authors’ rankings performs poorly compared to the other models."
],
"file": [
"4-Figure1-1.png",
"9-Table1-1.png",
"10-Table2-1.png",
"11-Table3-1.png",
"11-Table4-1.png"
]
} |
1808.05077 | Exploiting Deep Learning for Persian Sentiment Analysis | The rise of social media is enabling people to freely express their opinions about products and services. The aim of sentiment analysis is to automatically determine subject's sentiment (e.g., positive, negative, or neutral) towards a particular aspect such as topic, product, movie, news etc. Deep learning has recently emerged as a powerful machine learning technique to tackle a growing demand of accurate sentiment analysis. However, limited work has been conducted to apply deep learning algorithms to languages other than English, such as Persian. In this work, two deep learning models (deep autoencoders and deep convolutional neural networks (CNNs)) are developed and applied to a novel Persian movie reviews dataset. The proposed deep learning models are analyzed and compared with the state-of-the-art shallow multilayer perceptron (MLP) based machine learning model. Simulation results demonstrate the enhanced performance of deep learning over state-of-the-art MLP. | {
"section_name": [
"Introduction",
"Related Works",
"Methodology and Experimental Results",
"Conclusion",
"Acknowledgment"
],
"paragraphs": [
[
"In recent years, social media, forums, blogs and other forms of online communication tools have radically affected everyday life, especially how people express their opinions and comments. The extraction of useful information (such as people's opinion about companies brand) from the huge amount of unstructured data is vital for most companies and organizations BIBREF0 . The product reviews are important for business owners as they can take business decision accordingly to automatically classify user’s opinions towards products and services. The application of sentiment analysis is not limited to product or movie reviews but can be applied to different fields such as news, politics, sport etc. For example, in online political debates, the sentiment analysis can be used to identify people's opinions on a certain election candidate or political parties BIBREF1 BIBREF2 BIBREF3 . In this context, sentiment analysis has been widely used in different languages by using traditional and advanced machine learning techniques. However, limited research has been conducted to develop models for the Persian language.",
"The sentiment analysis is a method to automatically process large amounts of data and classify text into positive or negative sentiments) BIBREF4 BIBREF5 . Sentiment analysis can be performed at two levels: at the document level or at the sentence level. At document level it is used to classify the sentiment expressed in the document (positive or negative), whereas, at sentence level is used to identify the sentiments expressed only in the sentence under analysis BIBREF6 BIBREF7 .",
"In the literature, deep learning based automated feature extraction has been shown to outperform state-of-the-art manual feature engineering based classifiers such as Support Vector Machine (SVM), Naive Bayes (NB) or Multilayer Perceptron (MLP) etc. One of the important techniques in deep learning is the autoencoder that generally involves reducing the number of feature dimensions under consideration. The aim of dimensionality reduction is to obtain a set of principal variables to improve the performance of the approach. Similarly, CNNs have been proven to be very effective in sentiment analysis. However, little work has been carried out to exploit deep learning based feature representation for Persian sentiment analysis BIBREF8 BIBREF9 . In this paper, we present two deep learning models (deep autoencoders and CNNs) for Persian sentiment analysis. The obtained deep learning results are compared with MLP.",
"The rest of the paper is organized as follows: Section 2 presents related work. Section 3 presents methodology and experimental results. Finally, section 4 concludes this paper."
],
[
"In the literature, extensive research has been carried out to model novel sentiment analysis models using both shallow and deep learning algorithms. For example, the authors in BIBREF10 proposed a novel deep learning approach for polarity detection in product reviews. The authors addressed two major limitations of stacked denoising of autoencoders, high computational cost and the lack of scalability of high dimensional features. Their experimental results showed the effectiveness of proposed autoencoders in achieving accuracy upto 87%. Zhai et al., BIBREF11 proposed a five layers autoencoder for learning the specific representation of textual data. The autoencoders are generalised using loss function and derived discriminative loss function from label information. The experimental results showed that the model outperformed bag of words, denoising autoencoders and other traditional methods, achieving accuracy rate up to 85% . Sun et al., BIBREF12 proposed a novel method to extract contextual information from text using a convolutional autoencoder architecture. The experimental results showed that the proposed model outperformed traditional SVM and Nave Bayes models, reporting accuracy of 83.1 %, 63.9% and 67.8% respectively.",
"Su et al., BIBREF13 proposed an approach for a neural generative autoencoder for learning bilingual word embedding. The experimental results showed the effectiveness of their approach on English-Chinese, English-German, English-French and English-Spanish (75.36% accuracy). Kim et al., BIBREF14 proposed a method to capture the non-linear structure of data using CNN classifier. The experimental results showed the effectiveness of the method on the multi-domain dataset (movie reviews and product reviews). However, the disadvantage is only SVM and Naive Bayes classifiers are used to evaluate the performance of the method and deep learning classifiers are not exploited. Zhang et al., BIBREF15 proposed an approach using deep learning classifiers to detect polarity in Japanese movie reviews. The approach used denoising autoencoder and adapted to other domains such as product reviews. The advantage of the approach is not depended on any language and could be used for various languages by applying different datasets. AP et al., BIBREF16 proposed a CNN based model for cross-language learning of vectorial word representations that is coherent between two languages. The method is evaluated using English and German movie reviews dataset. The experimental results showed CNN (83.45% accuracy) outperformed as compared to SVM (65.25% accuracy).",
"Zhou et al., BIBREF17 proposed an autoencoder architecture constituting an LSTM-encoder and decoder in order to capture features in the text and reduce dimensionality of data. The LSTM encoder used the interactive scheme to go through the sequence of sentences and LSTM decoder reconstructed the vector of sentences. The model is evaluated using different datasets such as book reviews, DVD reviews, and music reviews, acquiring accuracy up to 81.05%, 81.06%, and 79.40% respectively. Mesnil et al., BIBREF18 proposed an approach using ensemble classification to detect polarity in the movie reviews. The authors combined several machine learning algorithms such as SVM, Naive Bayes and RNN to achieve better results, where autoencoders were used to reduce the dimensionality of features. The experimental results showed the combination of unigram, bigram and trigram features (91.87% accuracy) outperformed unigram (91.56% accuracy) and bigram (88.61% accuracy).",
"Scheible et al., BIBREF19 trained an approach using semi-supervised recursive autoencoder to detect polarity in movie reviews dataset, consisted of 5000 positive and 5000 negative sentiments. The experimental results demonstrated that the proposed approach successfully detected polarity in movie reviews dataset (83.13% accuracy) and outperformed standard SVM (68.36% accuracy) model. Dai et al., BIBREF20 developed an autoencoder to detect polarity in the text using deep learning classifier. The LSTM was trained on IMDB movie reviews dataset. The experimental results showed the outperformance of their proposed approach over SVM. In table 1 some of the autoencoder approaches are depicted."
],
[
"The novel dataset used in this work was collected manually and includes Persian movie reviews from 2014 to 2016. A subset of dataset was used to train the neural network (60% training dataset) and rest of the data (40%) was used to test and validate the performance of the trained neural network (testing set (30%), validation set (10%)). There are two types of labels in the dataset: positive or negative. The reviews were manually annotated by three native Persian speakers aged between 30 and 50 years old.",
"After data collection, the corpus was pre-processed using tokenisation, normalisation and stemming techniques. The process of converting sentences into single word or token is called tokenisation. For example, \"The movie is great\" is changed to \"The\", \"movie\", \"is\", \"great\" BIBREF21 . There are some words which contain numbers. For example, \"great\" is written as \"gr8\" or \"gooood\" as written as \"good\" . The normalisation is used to convert these words into normal forms BIBREF22 . The process of converting words into their root is called stemming. For example, going was changed to go BIBREF23 . Words were converted into vectors. The fasttext was used to convert each word into 300-dimensions vectors. Fasttext is a library for text classification and representation BIBREF24 BIBREF25 BIBREF9 .",
"For classification, MLP, autoencoders and CNNs have been used. Fig. 1. depicts the modelled MLP architectures. MLP classifer was trained for 100 iterations BIBREF26 . Fig. 2. depicts the modelled autoencoder architecture. Autoencoder is a feed-forward deep neural network with unsupervised learning and it is used for dimensionality reduction. The autoencoder consists of input, output and hidden layers. Autoencoder is used to compress the input into a latent-space and then the output is reconstructed BIBREF27 BIBREF28 BIBREF29 . The exploited autoencoder model is depcited in Fig. 1. The autoencoder consists of one input layer three hidden layers (1500, 512, 1500) and an output layer. Convolutional Neural Networks contains three layers (input, hidden and output layer). The hidden layer consists of convolutional layers, pooling layers, fully connected layers and normalisation layer. The INLINEFORM0 is denotes the hidden neurons of j, with bias of INLINEFORM1 , is a weight sum over continuous visible nodes v which is given by: DISPLAYFORM0 ",
"The modelled CNN architecture is depicted in Fig. 3 BIBREF29 BIBREF28 . For CNN modelling, each utterance was represented as a concatenation vector of constituent words. The network has total 11 layers: 4 convolution layers, 4 max pooling and 3 fully connected layers. Convolution layers have filters of size 2 and with 15 feature maps. Each convolution layer is followed by a max polling layer with window size 2. The last max pooling layer is followed by fully connected layers of size 5000, 500 and 4. For final layer, softmax activation is used.",
"To evaluate the performance of the proposed approach, precision (1), recall (2), f-Measure (3), and prediction accuracy (4) have been used as a performance matrices. The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1 ",
"where TP is denotes true positive, TN is true negative, FP is false positive, and FN is false negative."
],
[
"Sentiment analysis has been used extensively for a wide of range of real-world applications, ranging from product reviews, surveys feedback, to business intelligence, and operational improvements. However, the majority of research efforts are devoted to English-language only, where information of great importance is also available in other languages. In this work, we focus on developing sentiment analysis models for Persian language, specifically for Persian movie reviews. Two deep learning models (deep autoencoders and deep CNNs) are developed and compared with the the state-of-the-art shallow MLP based machine learning model. Simulations results revealed the outperformance of our proposed CNN model over autoencoders and MLP. In future, we intend to exploit more advanced deep learning models such as Long Short-Term Memory (LSTM) and LSTM-CNNs to further evaluate the performance of our developed novel Persian dataset."
],
[
"Amir Hussain and Ahsan Adeel were supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant No.EP/M026981/1.",
""
]
]
} | {
"question": [
"Which deep learning model performed better?",
"By how much did the results improve?",
"What was their performance on the dataset?",
"How large is the dataset?"
],
"question_id": [
"1951cde612751410355610074c3c69cec94824c2",
"4140d8b5a78aea985546aa1e323de12f63d24add",
"61272b1d0338ed7708cf9ed9c63060a6a53e97a2",
"53b02095ba7625d85721692fce578654f66bbdf0"
],
"nlp_background": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"autoencoders"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To evaluate the performance of the proposed approach, precision (1), recall (2), f-Measure (3), and prediction accuracy (4) have been used as a performance matrices. The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1"
],
"highlighted_evidence": [
"The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%."
]
},
{
"unanswerable": false,
"extractive_spans": [
"CNN"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In the literature, deep learning based automated feature extraction has been shown to outperform state-of-the-art manual feature engineering based classifiers such as Support Vector Machine (SVM), Naive Bayes (NB) or Multilayer Perceptron (MLP) etc. One of the important techniques in deep learning is the autoencoder that generally involves reducing the number of feature dimensions under consideration. The aim of dimensionality reduction is to obtain a set of principal variables to improve the performance of the approach. Similarly, CNNs have been proven to be very effective in sentiment analysis. However, little work has been carried out to exploit deep learning based feature representation for Persian sentiment analysis BIBREF8 BIBREF9 . In this paper, we present two deep learning models (deep autoencoders and CNNs) for Persian sentiment analysis. The obtained deep learning results are compared with MLP.",
"To evaluate the performance of the proposed approach, precision (1), recall (2), f-Measure (3), and prediction accuracy (4) have been used as a performance matrices. The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1"
],
"highlighted_evidence": [
"In this paper, we present two deep learning models (deep autoencoders and CNNs) for Persian sentiment analysis. ",
"To evaluate the performance of the proposed approach, precision (1), recall (2), f-Measure (3), and prediction accuracy (4) have been used as a performance matrices.",
"The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%.",
"The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%."
]
}
],
"annotation_id": [
"8801040685e36356914dba2b49d75f621694ac1f",
"c0e3ddbc9fc3cd2e27c50cdafbb01582b2f40401"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"eaf28aa624b9ef872f98664100388cc79f476be8"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"accuracy of 82.6%"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To evaluate the performance of the proposed approach, precision (1), recall (2), f-Measure (3), and prediction accuracy (4) have been used as a performance matrices. The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1"
],
"highlighted_evidence": [
"The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%."
]
}
],
"annotation_id": [
"656e25de142297da158afd6f0232c07571598d31"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"a192040636f61430877dab837726c57e2bbe7077"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Fig. 1. Multilayer Perceptron",
"Fig. 2. Autoencoder",
"Fig. 3. Deep Convolutional Neural Network",
"Table 1. Results: MLP vs. Autoencoder vs. Convolutional Neural Network"
],
"file": [
"5-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"7-Table1-1.png"
]
} |
1807.03367 | Talk the Walk: Navigating New York City through Grounded Dialogue | We introduce "Talk The Walk", the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a "guide" and a "tourist") that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task. | {
"section_name": [
null,
"Introduction",
"Talk The Walk",
"Task",
"Data Collection",
"Dataset Statistics",
"Experiments",
"Tourist Localization",
"Model",
"The Tourist",
"The Guide",
"Comparisons",
"Results and Discussion",
"Analysis of Localization Task",
"Emergent Language Localization",
"Natural Language Localization",
"Localization-based Baseline",
"Conclusion",
"Related Work",
"Implementation Details",
"Additional Natural Language Experiments",
"Tourist Generation Models",
"Localization from Human Utterances",
"Visualizing MASC predictions",
"Evaluation on Full Setup",
"Landmark Classification",
"Dataset Details"
],
"paragraphs": [
[
"0pt0.03.03 *",
"0pt0.030.03 *",
"0pt0.030.03",
"We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task."
],
[
"As artificial intelligence plays an ever more prominent role in everyday human lives, it becomes increasingly important to enable machines to communicate via natural language—not only with humans, but also with each other. Learning algorithms for natural language understanding, such as in machine translation and reading comprehension, have progressed at an unprecedented rate in recent years, but still rely on static, large-scale, text-only datasets that lack crucial aspects of how humans understand and produce natural language. Namely, humans develop language capabilities by being embodied in an environment which they can perceive, manipulate and move around in; and by interacting with other humans. Hence, we argue that we should incorporate all three fundamental aspects of human language acquisition—perception, action and interactive communication—and develop a task and dataset to that effect.",
"We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .",
"Grounded language learning has (re-)gained traction in the AI community, and much attention is currently devoted to virtual embodiment—the development of multi-agent communication tasks in virtual environments—which has been argued to be a viable strategy for acquiring natural language semantics BIBREF0 . Various related tasks have recently been introduced, but in each case with some limitations. Although visually grounded dialogue tasks BIBREF1 , BIBREF2 comprise perceptual grounding and multi-agent interaction, their agents are passive observers and do not act in the environment. By contrast, instruction-following tasks, such as VNL BIBREF3 , involve action and perception but lack natural language interaction with other agents. Furthermore, some of these works use simulated environments BIBREF4 and/or templated language BIBREF5 , which arguably oversimplifies real perception or natural language, respectively. See Table TABREF15 for a comparison.",
"Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.",
"We argue that for artificial agents to solve this challenging problem, some fundamental architecture designs are missing, and our hope is that this task motivates their innovation. To that end, we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism. To model the interaction between language and action, this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence.",
"This work makes the following contributions: 1) We present the first large scale dialogue dataset grounded in action and perception; 2) We introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language; 4) Using localization models, we establish initial baselines on the full task; 5) We show that our best model exceeds human performance under the assumption of “perfect perception” and with a learned emergent communication protocol, and sets a non-trivial baseline with natural language."
],
[
"We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360 camera. Most parts of the city are grid-like and uniform, which makes it well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side—see Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection, leading to a grid-size of roughly 10x10 per neighborhood.",
"The tourist's location is given as a tuple INLINEFORM0 , where INLINEFORM1 are the coordinates and INLINEFORM2 signifies the orientation (north, east, south or west). The tourist can take three actions: turn left, turn right and go forward. For moving forward, we add INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 to the INLINEFORM7 coordinates for the respective orientations. Upon a turning action, the orientation is updated by INLINEFORM8 where INLINEFORM9 for left and INLINEFORM10 for right. If the tourist moves outside the grid, we issue a warning that they cannot go in that direction and do not update the location. Moreover, tourists are shown different types of transitions: a short transition for actions that bring the tourist to a different corner of the same intersection; and a longer transition for actions that bring them to a new intersection.",
"The guide observes a map that corresponds to the tourist's environment. We exploit the fact that urban areas like NYC are full of local businesses, and overlay the map with these landmarks as localization points for our task. Specifically, we manually annotate each corner of the intersection with a set of landmarks INLINEFORM0 , each coming from one of the following categories:",
" Bar Playfield Bank Hotel Shop Subway Coffee Shop Restaurant Theater ",
"The right-side of Figure FIGREF3 illustrates how the map is presented. Note that within-intersection transitions have a smaller grid distance than transitions to new intersections. To ensure that the localization task is not too easy, we do not include street names in the overhead map and keep the landmark categories coarse. That is, the dialogue is driven by uncertainty in the tourist's current location and the properties of the target location: if the exact location and orientation of the tourist were known, it would suffice to communicate a sequence of actions."
],
[
"For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners INLINEFORM0 . Next, we construct the overhead map of the environment, i.e. INLINEFORM1 with INLINEFORM2 and INLINEFORM3 . We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random.",
"The shared goal of the two agents is to navigate the tourist to the target location INLINEFORM0 , which is only known to the guide. The tourist perceives a “street view” planar projection INLINEFORM1 of the 360 image at location INLINEFORM2 and can simultaneously chat with the guide and navigate through the environment. The guide's role consists of reading the tourist description of the environment, building a “mental map” of their current position and providing instructions for navigating towards the target location. Whenever the guide believes that the tourist has reached the target location, they instruct the system to evaluate the tourist's location. The task ends when the evaluation is successful—i.e., when INLINEFORM3 —or otherwise continues until a total of three failed attempts. The additional attempts are meant to ease the task for humans, as we found that they otherwise often fail at the task but still end up close to the target location, e.g., at the wrong corner of the correct intersection."
],
[
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
],
[
"The Talk The Walk dataset consists of over 10k successful dialogues—see Table FIGREF66 in the appendix for the dataset statistics split by neighborhood. Turkers successfully completed INLINEFORM0 of all finished tasks (we use this statistic as the human success rate). More than six hundred participants successfully completed at least one Talk The Walk HIT. Although the Visual Dialog BIBREF2 and GuessWhat BIBREF1 datasets are larger, the collected Talk The Walk dialogs are significantly longer. On average, Turkers needed more than 62 acts (i.e utterances and actions) before they successfully completed the task, whereas Visual Dialog requires 20 acts. The majority of acts comprise the tourist's actions, with on average more than 44 actions per dialogue. The guide produces roughly 9 utterances per dialogue, slightly more than the tourist's 8 utterances. Turkers use diverse discourse, with a vocabulary size of more than 10K (calculated over all successful dialogues). An example from the dataset is shown in Appendix SECREF14 . The dataset is available at https://github.com/facebookresearch/talkthewalk."
],
[
"We investigate the difficulty of the proposed task by establishing initial baselines. The final Talk The Walk task is challenging and encompasses several important sub-tasks, ranging from landmark recognition to tourist localization and natural language instruction-giving. Arguably the most important sub-task is localization: without such capabilities the guide can not tell whether the tourist reached the target location. In this work, we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization. Specifically, we let trained tourist models undertake random walks, using the following protocol: at each step, the tourist communicates its observations and actions to the guide, who predicts the tourist's location. If the guide predicts that the tourist is at target, we evaluate its location. If successful, the task ends, otherwise we continue until there have been three wrong evaluations. The protocol is given as pseudo-code in Appendix SECREF12 ."
],
[
"The designed navigation protocol relies on a trained localization model that predicts the tourist's location from a communicated message. Before we formalize this localization sub-task in Section UID21 , we further introduce two simplifying assumptions—perfect perception and orientation-agnosticism—so as to overcome some of the difficulties we encountered in preliminary experiments.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Perfect Perception Early experiments revealed that perceptual grounding of landmarks is difficult: we set up a landmark classification problem, on which models with extracted CNN BIBREF7 or text recognition features BIBREF8 barely outperform a random baseline—see Appendix SECREF13 for full details. This finding implies that localization models from image input are limited by their ability to recognize landmarks, and, as a result, would not generalize to unseen environments. To ensure that perception is not the limiting factor when investigating the landmark-grounding and action-grounding capabilities of localization models, we assume “perfect perception”: in lieu of the 360 image view, the tourist is given the landmarks at its current location. More formally, each state observation INLINEFORM0 now equals the set of landmarks at the INLINEFORM1 -location, i.e. INLINEFORM2 . If the INLINEFORM3 -location does not have any visible landmarks, we return a single “empty corner” symbol. We stress that our findings—including a novel architecture for grounding actions into an overhead map, see Section UID28 —should carry over to settings without the perfect perception assumption.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Orientation-agnosticism We opt to ignore the tourist's orientation, which simplifies the set of actions to [Left, Right, Up, Down], corresponding to adding [(-1, 0), (1, 0), (0, 1), (0, -1)] to the current INLINEFORM0 coordinates, respectively. Note that actions are now coupled to an orientation on the map—e.g. up is equal to going north—and this implicitly assumes that the tourist has access to a compass. This also affects perception, since the tourist now has access to views from all orientations: in conjunction with “perfect perception”, implying that only landmarks at the current corner are given, whereas landmarks from different corners (e.g. across the street) are not visible.",
"Even with these simplifications, the localization-based baseline comes with its own set of challenges. As we show in Section SECREF34 , the task requires communication about a short (random) path—i.e., not only a sequence of observations but also actions—in order to achieve high localization accuracy. This means that the guide needs to decode observations from multiple time steps, as well as understand their 2D spatial arrangement as communicated via the sequence of actions. Thus, in order to get to a good understanding of the task, we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guide's map. In doing so, we thoroughly study the role of the communication channel in the localization task, by investigating increasingly constrained forms of communication: from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language.",
"The full navigation baseline hinges on a localization model from random trajectories. While we can sample random actions in the emergent communication setup, this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators. This leads to slightly different problem setups, as described below.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Emergent language A tourist, starting from a random location, takes INLINEFORM0 random actions INLINEFORM1 to reach target location INLINEFORM2 . Every location in the environment has a corresponding set of landmarks INLINEFORM3 for each of the INLINEFORM4 coordinates. As the tourist navigates, the agent perceives INLINEFORM5 state-observations INLINEFORM6 where each observation INLINEFORM7 consists of a set of INLINEFORM8 landmark symbols INLINEFORM9 . Given the observations INLINEFORM10 and actions INLINEFORM11 , the tourist generates a message INLINEFORM12 which is communicated to the other agent. The objective of the guide is to predict the location INLINEFORM13 from the tourist's message INLINEFORM14 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Natural language In contrast to our emergent communication experiments, we do not take random actions but instead extract actions, observations, and messages from the dataset. Specifically, we consider each tourist utterance (i.e. at any point in the dialogue), obtain the current tourist location as target location INLINEFORM0 , the utterance itself as message INLINEFORM1 , and the sequence of observations and actions that took place between the current and previous tourist utterance as INLINEFORM2 and INLINEFORM3 , respectively. Similar to the emergent language setting, the guide's objective is to predict the target location INLINEFORM4 models from the tourist message INLINEFORM5 . We conduct experiments with INLINEFORM6 taken from the dataset and with INLINEFORM7 generated from the extracted observations INLINEFORM8 and actions INLINEFORM9 ."
],
[
"This section outlines the tourist and guide architectures. We first describe how the tourist produces messages for the various communication channels across which the messages are sent. We subsequently describe how these messages are processed by the guide, and introduce the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding into the 2D overhead map in order to predict the tourist's location."
],
[
"For each of the communication channels, we outline the procedure for generating a message INLINEFORM0 . Given a set of state observations INLINEFORM1 , we represent each observation by summing the INLINEFORM2 -dimensional embeddings of the observed landmarks, i.e. for INLINEFORM3 , INLINEFORM4 , where INLINEFORM5 is the landmark embedding lookup table. In addition, we embed action INLINEFORM6 into a INLINEFORM7 -dimensional embedding INLINEFORM8 via a look-up table INLINEFORM9 . We experiment with three types of communication channel.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Continuous vectors The tourist has access to observations of several time steps, whose order is important for accurate localization. Because summing embeddings is order-invariant, we introduce a sum over positionally-gated embeddings, which, conditioned on time step INLINEFORM0 , pushes embedding information into the appropriate dimensions. More specifically, we generate an observation message INLINEFORM1 , where INLINEFORM2 is a learned gating vector for time step INLINEFORM3 . In a similar fashion, we produce action message INLINEFORM4 and send the concatenated vectors INLINEFORM5 as message to the guide. We can interpret continuous vector communication as a single, monolithic model because its architecture is end-to-end differentiable, enabling gradient-based optimization for training.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Discrete symbols Like the continuous vector communication model, with discrete communication the tourist also uses separate channels for observations and actions, as well as a sum over positionally gated embeddings to generate observation embedding INLINEFORM0 . We pass this embedding through a sigmoid and generate a message INLINEFORM1 by sampling from the resulting Bernoulli distributions:",
" INLINEFORM0 ",
"The action message INLINEFORM0 is produced in the same way, and we obtain the final tourist message INLINEFORM1 through concatenating the messages.",
"The communication channel's sampling operation yields the model non-differentiable, so we use policy gradients BIBREF9 , BIBREF10 to train the parameters INLINEFORM0 of the tourist model. That is, we estimate the gradient by INLINEFORM1 ",
" where the reward function INLINEFORM0 is the negative guide's loss (see Section SECREF25 ) and INLINEFORM1 a state-value baseline to reduce variance. We use a linear transformation over the concatenated embeddings as baseline prediction, i.e. INLINEFORM2 , and train it with a mean squared error loss.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Natural Language Because observations and actions are of variable-length, we use an LSTM encoder over the sequence of observations embeddings INLINEFORM0 , and extract its last hidden state INLINEFORM1 . We use a separate LSTM encoder for action embeddings INLINEFORM2 , and concatenate both INLINEFORM3 and INLINEFORM4 to the input of the LSTM decoder at each time step: DISPLAYFORM0 ",
" where INLINEFORM0 a look-up table, taking input tokens INLINEFORM1 . We train with teacher-forcing, i.e. we optimize the cross-entropy loss: INLINEFORM2 . At test time, we explore the following decoding strategies: greedy, sampling and a beam-search. We also fine-tune a trained tourist model (starting from a pre-trained model) with policy gradients in order to minimize the guide's prediction loss."
],
[
"Given a tourist message INLINEFORM0 describing their observations and actions, the objective of the guide is to predict the tourist's location on the map. First, we outline the procedure for extracting observation embedding INLINEFORM1 and action embeddings INLINEFORM2 from the message INLINEFORM3 for each of the types of communication. Next, we discuss the MASC mechanism that takes the observations and actions in order to ground them on the guide's map in order to predict the tourist's location.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Continuous For the continuous communication model, we assign the observation message to the observation embedding, i.e. INLINEFORM0 . To extract the action embedding for time step INLINEFORM1 , we apply a linear layer to the action message, i.e. INLINEFORM2 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Discrete For discrete communication, we obtain observation INLINEFORM0 by applying a linear layer to the observation message, i.e. INLINEFORM1 . Similar to the continuous communication model, we use a linear layer over action message INLINEFORM2 to obtain action embedding INLINEFORM3 for time step INLINEFORM4 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Natural Language The message INLINEFORM0 contains information about observations and actions, so we use a recurrent neural network with attention mechanism to extract the relevant observation and action embeddings. Specifically, we encode the message INLINEFORM1 , consisting of INLINEFORM2 tokens INLINEFORM3 taken from vocabulary INLINEFORM4 , with a bidirectional LSTM: DISPLAYFORM0 ",
" where INLINEFORM0 is the word embedding look-up table. We obtain observation embedding INLINEFORM1 through an attention mechanism over the hidden states INLINEFORM2 : DISPLAYFORM0 ",
"where INLINEFORM0 is a learned control embedding who is updated through a linear transformation of the previous control and observation embedding: INLINEFORM1 . We use the same mechanism to extract the action embedding INLINEFORM2 from the hidden states. For the observation embedding, we obtain the final representation by summing positionally gated embeddings, i.e., INLINEFORM3 .",
"We represent the guide's map as INLINEFORM0 , where in this case INLINEFORM1 , where each INLINEFORM2 -dimensional INLINEFORM3 location embedding INLINEFORM4 is computed as the sum of the guide's landmark embeddings for that location.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Motivation While the guide's map representation contains only local landmark information, the tourist communicates a trajectory of the map (i.e. actions and observations from multiple locations), implying that directly comparing the tourist's message with the individual landmark embeddings is probably suboptimal. Instead, we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations. We propose a mechanism for translating landmark embeddings according to state transitions (left, right, up, down), which can be expressed as a 2D convolution over the map embeddings. For simplicity, let us assume that the map embedding INLINEFORM0 is 1-dimensional, then a left action can be realized through application of the following INLINEFORM1 kernel: INLINEFORM2 which effectively shifts all values of INLINEFORM3 one position to the left. We propose to learn such state-transitions from the tourist message through a differentiable attention-mask over the spatial dimensions of a 3x3 convolution.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em MASC We linearly project each predicted action embedding INLINEFORM0 to a 9-dimensional vector INLINEFORM1 , normalize it by a softmax and subsequently reshape the vector into a 3x3 mask INLINEFORM2 : DISPLAYFORM0 ",
" We learn a 3x3 convolutional kernel INLINEFORM0 , with INLINEFORM1 features, and apply the mask INLINEFORM2 to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions, i.e. INLINEFORM3 , and subsequently taking the Hadamard product: INLINEFORM4 . For each action step INLINEFORM5 , we then apply a 2D convolution with masked weight INLINEFORM6 to obtain a new map embedding INLINEFORM7 , where we zero-pad the input to maintain identical spatial dimensions.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Prediction model We repeat the MASC operation INLINEFORM0 times (i.e. once for each action), and then aggregate the map embeddings by a sum over positionally-gated embeddings: INLINEFORM1 . We score locations by taking the dot-product of the observation embedding INLINEFORM2 , which contains information about the sequence of observed landmarks by the tourist, and the map. We compute a distribution over the locations of the map INLINEFORM3 by taking a softmax over the computed scores: DISPLAYFORM0 ",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Predicting T While emergent communication models use a fixed length trasjectory INLINEFORM0 , natural language messages may differ in the number of communicated observations and actions. Hence, we predict INLINEFORM1 from the communicated message. Specifically, we use a softmax regression layer over the last hidden state INLINEFORM2 of the RNN, and subsequently sample INLINEFORM3 from the resulting multinomial distribution: DISPLAYFORM0 ",
"We jointly train the INLINEFORM0 -prediction model via REINFORCE, with the guide's loss as reward function and a mean-reward baseline."
],
[
"To better analyze the performance of the models incorporating MASC, we compare against a no-MASC baseline in our experiments, as well as a prediction upper bound.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em No MASC We compare the proposed MASC model with a model that does not include this mechanism. Whereas MASC predicts a convolution mask from the tourist message, the “No MASC” model uses INLINEFORM0 , the ordinary convolutional kernel to convolve the map embedding INLINEFORM1 to obtain INLINEFORM2 . We also share the weights of this convolution at each time step.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Prediction upper-bound Because we have access to the class-conditional likelihood INLINEFORM0 , we are able to compute the Bayes error rate (or irreducible error). No model (no matter how expressive) with any amount of data can ever obtain better localization accuracy as there are multiple locations consistent with the observations and actions."
],
[
"In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work."
],
[
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Task is not too easy The upper-bound on localization performance in Table TABREF32 suggest that communicating a single landmark observation is not sufficient for accurate localization of the tourist ( INLINEFORM0 35% accuracy). This is an important result for the full navigation task because the need for two-way communication disappears if localization is too easy; if the guide knows the exact location of the tourist it suffices to communicate a list of instructions, which is then executed by the tourist. The uncertainty in the tourist's location is what drives the dialogue between the two agents.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Importance of actions We observe that the upperbound for only communicating observations plateaus around 57% (even for INLINEFORM0 actions), whereas it exceeds 90% when we also take actions into account. This implies that, at least for random walks, it is essential to communicate a trajectory, including observations and actions, in order to achieve high localization accuracy."
],
[
"We first report the results for tourist localization with emergent language in Table TABREF32 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em MASC improves performance The MASC architecture significantly improves performance compared to models that do not include this mechanism. For instance, for INLINEFORM0 action, MASC already achieves 56.09 % on the test set and this further increases to 69.85% for INLINEFORM1 . On the other hand, no-MASC models hit a plateau at 43%. In Appendix SECREF11 , we analyze learned MASC values, and show that communicated actions are often mapped to corresponding state-transitions.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Continuous vs discrete We observe similar performance for continuous and discrete emergent communication models, implying that a discrete communication channel is not a limiting factor for localization performance."
],
[
"We report the results of tourist localization with natural language in Table TABREF36 . We compare accuracy of the guide model (with MASC) trained on utterances from (i) humans, (ii) a supervised model with various decoding strategies, and (iii) a policy gradient model optimized with respect to the loss of a frozen, pre-trained guide model on human utterances.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Human utterances Compared to emergent language, localization from human utterances is much harder, achieving only INLINEFORM0 on the test set. Here, we report localization from a single utterance, but in Appendix SECREF45 we show that including up to five dialogue utterances only improves performance to INLINEFORM1 . We also show that MASC outperform no-MASC models for natural language communication.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Generated utterances We also investigate generated tourist utterances from conditional language models. Interestingly, we observe that the supervised model (with greedy and beam-search decoding) as well as the policy gradient model leads to an improvement of more than 10 accuracy points over the human utterances. However, their level of accuracy is slightly below the baseline of communicating a single observation, indicating that these models only learn to ground utterances in a single landmark observation.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Better grounding of generated utterances We analyze natural language samples in Table TABREF38 , and confirm that, unlike human utterances, the generated utterances are talking about the observed landmarks. This observation explains why the generated utterances obtain higher localization accuracy. The current language models are most successful when conditioned on a single landmark observation; We show in Appendix UID43 that performance quickly deteriorates when the model is conditioned on more observations, suggesting that it can not produce natural language utterances about multiple time steps."
],
[
"Table TABREF36 shows results for the best localization models on the full task, evaluated via the random walk protocol defined in Algorithm SECREF12 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Comparison with human annotators Interestingly, our best localization model (continuous communication, with MASC, and INLINEFORM0 ) achieves 88.33% on the test set and thus exceed human performance of 76.74% on the full task. While emergent models appear to be stronger localizers, humans might cope with their localization uncertainty through other mechanisms (e.g. better guidance, bias towards taking particular paths, etc). The simplifying assumption of perfect perception also helps.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Number of actions Unsurprisingly, humans take fewer steps (roughly 15) than our best random walk model (roughly 34). Our human annotators likely used some form of guidance to navigate faster to the target."
],
[
"We introduced the Talk The Walk task and dataset, which consists of crowd-sourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC. For the important localization sub-task, we proposed MASC—a novel grounding mechanism to learn state-transition from the tourist's message—and showed that it improves localization performance for emergent and natural language. We use the localization model to provide baseline numbers on the Talk The Walk task, in order to facilitate future research."
],
[
"The Talk the Walk task and dataset facilitate future research on various important subfields of artificial intelligence, including grounded language learning, goal-oriented dialogue research and situated navigation. Here, we describe related previous work in these areas.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Related tasks There has been a long line of work involving related tasks. Early work on task-oriented dialogue dates back to the early 90s with the introduction of the Map Task BIBREF11 and Maze Game BIBREF25 corpora. Recent efforts have led to larger-scale goal-oriented dialogue datasets, for instance to aid research on visually-grounded dialogue BIBREF2 , BIBREF1 , knowledge-base-grounded discourse BIBREF29 or negotiation tasks BIBREF36 . At the same time, there has been a big push to develop environments for embodied AI, many of which involve agents following natural language instructions with respect to an environment BIBREF13 , BIBREF50 , BIBREF5 , BIBREF39 , BIBREF19 , BIBREF18 , following-up on early work in this area BIBREF38 , BIBREF20 . An early example of navigation using neural networks is BIBREF28 , who propose an online learning approach for robot navigation. Recently, there has been increased interest in using end-to-end trainable neural networks for learning to navigate indoor scenes BIBREF27 , BIBREF26 or large cities BIBREF17 , BIBREF40 , but, unlike our work, without multi-agent communication. Also the task of localization (without multi-agent communication) has recently been studied BIBREF18 , BIBREF48 .",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Grounded language learning Grounded language learning is motivated by the observation that humans learn language embodied (grounded) in sensorimotor experience of the physical world BIBREF15 , BIBREF45 . On the one hand, work in multi-modal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks BIBREF14 , BIBREF31 . In robotics, researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world BIBREF44 , BIBREF46 . Recently, grounding has also been applied to the learning of sentence representations BIBREF32 , image captioning BIBREF37 , BIBREF49 , visual question answering BIBREF12 , BIBREF22 , visual reasoning BIBREF30 , BIBREF42 , and grounded machine translation BIBREF43 , BIBREF23 . Grounding also plays a crucial role in the emergent research of multi-agent communication, where, agents communicate (in natural language or otherwise) in order to solve a task, with respect to their shared environment BIBREF35 , BIBREF21 , BIBREF41 , BIBREF24 , BIBREF36 , BIBREF47 , BIBREF34 ."
],
[
"For the emergent communication models, we use an embedding size INLINEFORM0 . The natural language experiments use 128-dimensional word embeddings and a bidirectional RNN with 256 units. In all experiments, we train the guide with a cross entropy loss using the ADAM optimizer with default hyper-parameters BIBREF33 . We perform early stopping on the validation accuracy, and report the corresponding train, valid and test accuracy. We optimize the localization models with continuous, discrete and natural language communication channels for 200, 200, and 25 epochs, respectively. To facilitate further research on Talk The Walk, we make our code base for reproducing experiments publicly available at https://github.com/facebookresearch/talkthewalk."
],
[
"First, we investigate the sensitivity of tourist generation models to the trajectory length, finding that the model conditioned on a single observation (i.e. INLINEFORM0 ) achieves best performance. In the next subsection, we further analyze localization models from human utterances by investigating MASC and no-MASC models with increasing dialogue context."
],
[
"After training the supervised tourist model (conditioned on observations and action from human expert trajectories), there are two ways to train an accompanying guide model. We can optimize a location prediction model on either (i) extracted human trajectories (as in the localization setup from human utterances) or (ii) on all random paths of length INLINEFORM0 (as in the full task evaluation). Here, we investigate the impact of (1) using either human or random trajectories for training the guide model, and (2) the effect of varying the path length INLINEFORM1 during the full-task evaluation. For random trajectories, guide training uses the same path length INLINEFORM2 as is used during evaluation. We use a pre-trained tourist model with greedy decoding for generating the tourist utterances. Table TABREF40 summarizes the results.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Human vs random trajectories We only observe small improvements for training on random trajectories. Human trajectories are thus diverse enough to generalize to random trajectories.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Effect of path length There is a strong negative correlation between task success and the conditioned trajectory length. We observe that the full task performance quickly deteriorates for both human and random trajectories. This suggests that the tourist generation model can not produce natural language utterances that describe multiple observations and actions. Although it is possible that the guide model can not process such utterances, this is not very likely because the MASC architectures handles such messages successfully for emergent communication.",
"We report localization performance of tourist utterances generated by beam search decoding of varying beam size in Table TABREF40 . We find that performance decreases from 29.05% to 20.87% accuracy on the test set when we increase the beam-size from one to eight."
],
[
"We conduct an ablation study for MASC on natural language with varying dialogue context. Specifically, we compare localization accuracy of MASC and no-MASC models trained on the last [1, 3, 5] utterances of the dialogue (including guide utterances). We report these results in Table TABREF41 . In all cases, MASC outperforms the no-MASC models by several accuracy points. We also observe that mean predicted INLINEFORM0 (over the test set) increases from 1 to 2 when more dialogue context is included."
],
[
"Figure FIGREF46 shows the MASC values for a learned model with emergent discrete communications and INLINEFORM0 actions. Specifically, we look at the predicted MASC values for different action sequences taken by the tourist. We observe that the first action is always mapped to the correct state-transition, but that the second and third MASC values do not always correspond to right state-transitions."
],
[
"We provide pseudo-code for evaluation of localization models on the full task in Algorithm SECREF12 , as well as results for all emergent communication models in Table TABREF55 .",
" INLINEFORM0 INLINEFORM1 ",
" INLINEFORM0 take new action INLINEFORM1 INLINEFORM2 ",
"Performance evaluation of location prediction model on full Talk The Walk setup"
],
[
"While the guide has access to the landmark labels, the tourist needs to recognize these landmarks from raw perceptual information. In this section, we study landmark classification as a supervised learning problem to investigate the difficulty of perceptual grounding in Talk The Walk.",
"The Talk The Walk dataset contains a total of 307 different landmarks divided among nine classes, see Figure FIGREF62 for how they are distributed. The class distribution is fairly imbalanced, with shops and restaurants as the most frequent landmarks and relatively few play fields and theaters. We treat landmark recognition as a multi-label classification problem as there can be multiple landmarks on a corner.",
"For the task of landmark classification, we extract the relevant views of the 360 image from which a landmark is visible. Because landmarks are labeled to be on a specific corner of an intersection, we assume that they are visible from one of the orientations facing away from the intersection. For example, for a landmark on the northwest corner of an intersection, we extract views from both the north and west direction. The orientation-specific views are obtained by a planar projection of the full 360-image with a small field of view (60 degrees) to limit distortions. To cover the full field of view, we extract two images per orientation, with their horizontal focus point 30 degrees apart. Hence, we obtain eight images per 360 image with corresponding orientation INLINEFORM0 .",
"We run the following pre-trained feature extractors over the extracted images:",
"For the text recognition model, we use a learned look-up table INLINEFORM0 to embed the extracted text features INLINEFORM1 , and fuse all embeddings of four images through a bag of embeddings, i.e., INLINEFORM2 . We use a linear layer followed by a sigmoid to predict the probability for each class, i.e. INLINEFORM3 . We also experiment with replacing the look-up embeddings with pre-trained FastText embeddings BIBREF16 . For the ResNet model, we use a bag of embeddings over the four ResNet features, i.e. INLINEFORM4 , before we pass it through a linear layer to predict the class probabilities: INLINEFORM5 . We also conduct experiments where we first apply PCA to the extracted ResNet and FastText features before we feed them to the model.",
"To account for class imbalance, we train all described models with a binary cross entropy loss weighted by the inverted class frequency. We create a 80-20 class-conditional split of the dataset into a training and validation set. We train for 100 epochs and perform early stopping on the validation loss.",
"The F1 scores for the described methods in Table TABREF65 . We compare to an “all positive” baseline that always predicts that the landmark class is visible and observe that all presented models struggle to outperform this baseline. Although 256-dimensional ResNet features achieve slightly better precision on the validation set, it results in much worse recall and a lower F1 score. Our results indicate that perceptual grounding is a difficult task, which easily merits a paper of its own right, and so we leave further improvements (e.g. better text recognizers) for future work."
],
[
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Dataset split We split the full dataset by assigning entire 4x4 grids (independent of the target location) to the train, valid or test set. Specifically, we design the split such that the valid set contains at least one intersection (out of four) is not part of the train set. For the test set, all four intersections are novel. See our source code, available at URL ANONYMIZED, for more details on how this split is realized.",
"paragraph4 0.1ex plus0.1ex minus.1ex-1em Example",
"Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: Hello, what are you near?",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT",
"Tourist: Hello, in front of me is a Brooks Brothers",
"Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT",
"Guide: Is that a shop or restaurant?",
"Tourist: ACTION:TURNLEFT",
"Tourist: It is a clothing shop.",
"Tourist: ACTION:TURNLEFT",
"Guide: You need to go to the intersection in the northwest corner of the map",
"Tourist: ACTION:TURNLEFT",
"Tourist: There appears to be a bank behind me.",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: Ok, turn left then go straight up that road",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT",
" ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT",
"Guide: There should be shops on two of the corners but you",
" need to go to the corner without a shop.",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT",
"Guide: let me know when you get there.",
"Tourist: on my left is Radio city Music hall",
"Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Tourist: I can't go straight any further.",
"Guide: ok. turn so that the theater is on your right.",
"Guide: then go straight",
"Tourist: That would be going back the way I came",
"Guide: yeah. I was looking at the wrong bank",
"Tourist: I'll notify when I am back at the brooks brothers, and the bank.",
"Tourist: ACTION:TURNRIGHT",
"Guide: make a right when the bank is on your left",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT",
"Tourist: Making the right at the bank.",
"Tourist: ACTION:FORWARD ACTION:FORWARD",
"Tourist: I can't go that way.",
"Tourist: ACTION:TURNLEFT",
"Tourist: Bank is ahead of me on the right",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT",
"Guide: turn around on that intersection",
"Tourist: I can only go to the left or back the way I just came.",
"Tourist: ACTION:TURNLEFT",
"Guide: you're in the right place. do you see shops on the corners?",
"Guide: If you're on the corner with the bank, cross the street",
"Tourist: I'm back where I started by the shop and the bank.",
"Tourist: ACTION:TURNRIGHT",
"Guide: on the same side of the street?",
"Tourist: crossing the street now",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT",
"Tourist: there is an I love new york shop across the street on the left from me now",
"Tourist: ACTION:TURNRIGHT ACTION:FORWARD",
"Guide: ok. I'll see if it's right.",
"Guide: EVALUATE_LOCATION",
"Guide: It's not right.",
"Tourist: What should I be on the look for?",
"Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: There should be shops on two corners but you need to be on one of the corners",
" without the shop.",
"Guide: Try the other corner.",
"Tourist: this intersection has 2 shop corners and a bank corner",
"Guide: yes. that's what I see on the map.",
"Tourist: should I go to the bank corner? or one of the shop corners?",
" or the blank corner (perhaps a hotel)",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: Go to the one near the hotel. The map says the hotel is a little",
" further down but it might be a little off.",
"Tourist: It's a big hotel it's possible.",
"Tourist: ACTION:FORWARD ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT",
"Tourist: I'm on the hotel corner",
"Guide: EVALUATE_LOCATION"
]
]
} | {
"question": [
"Did the authors use crowdsourcing platforms?",
"How was the dataset collected?",
"What language do the agents talk in?",
"What evaluation metrics did the authors look at?",
"What data did they use?"
],
"question_id": [
"0cd0755ac458c3bafbc70e4268c1e37b87b9721b",
"c1ce652085ef9a7f02cb5c363ce2b8757adbe213",
"96be67b1729c3a91ddf0ec7d6a80f2aa75e30a30",
"b85ab5f862221fac819cf2fef239bcb08b9cafc6",
"7e34501255b89d64b9598b409d73f96489aafe45"
],
"nlp_background": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
],
"highlighted_evidence": [
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk)."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .",
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
],
"highlighted_evidence": [
"We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location.",
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
]
}
],
"annotation_id": [
"2e3c476fd6c267447136656da446e9bb41953f03",
"83b6b215aff8b6d9e9fa3308c962e0a916725a78"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs."
],
"highlighted_evidence": [
"We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk)."
]
}
],
"annotation_id": [
"73af0af52c32977bb9ccbd3aa9fb3294b5883647"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "English",
"evidence": [
"Tourist: I can't go straight any further.",
"Guide: ok. turn so that the theater is on your right.",
"Guide: then go straight",
"Tourist: That would be going back the way I came",
"Guide: yeah. I was looking at the wrong bank",
"Tourist: I'll notify when I am back at the brooks brothers, and the bank.",
"Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT",
"Guide: make a right when the bank is on your left",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT",
"Tourist: Making the right at the bank.",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT",
"Tourist: I can't go that way.",
"Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT",
"Tourist: Bank is ahead of me on the right",
"Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT",
"Guide: turn around on that intersection",
"Tourist: I can only go to the left or back the way I just came.",
"Guide: you're in the right place. do you see shops on the corners?",
"Guide: If you're on the corner with the bank, cross the street",
"Tourist: I'm back where I started by the shop and the bank."
],
"highlighted_evidence": [
"Tourist: I can't go straight any further.\n\nGuide: ok. turn so that the theater is on your right.\n\nGuide: then go straight\n\nTourist: That would be going back the way I came\n\nGuide: yeah. I was looking at the wrong bank\n\nTourist: I'll notify when I am back at the brooks brothers, and the bank.\n\nTourist: ACTION:TURNRIGHT\n\nGuide: make a right when the bank is on your left\n\nTourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT\n\nTourist: Making the right at the bank.\n\nTourist: ACTION:FORWARD ACTION:FORWARD\n\nTourist: I can't go that way.\n\nTourist: ACTION:TURNLEFT\n\nTourist: Bank is ahead of me on the right\n\nTourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT\n\nGuide: turn around on that intersection\n\nTourist: I can only go to the left or back the way I just came.\n\nTourist: ACTION:TURNLEFT\n\nGuide: you're in the right place. do you see shops on the corners?\n\nGuide: If you're on the corner with the bank, cross the street\n\nTourist: I'm back where I started by the shop and the bank.\n\nTourist: ACTION:TURNRIGHT"
]
}
],
"annotation_id": [
"d214afafe6bd69ae7f9c19125ce11b923ef6e105"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"localization accuracy"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work."
],
"highlighted_evidence": [
"Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism."
]
}
],
"annotation_id": [
"4acbf4a7c3f8dc02bc259031930c18db54159fa1"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" dataset on Mechanical Turk involving human perception, action and communication"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication."
],
"highlighted_evidence": [
" Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication."
]
}
],
"annotation_id": [
"09a25106160ae412e6a625f9b056e12d2f98ec82"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Figure 1: Example of the Talk The Walk task: two agents, a “tourist” and a “guide”, interact with each other via natural language in order to have the tourist navigate towards the correct location. The guide has access to a map and knows the target location but not the tourist location, while the tourist does not know the way but can navigate in a 360-degree street view environment.",
"Table 1: Talk The Walk grounds human generated dialogue in (real-life) perception and action.",
"Table 2: Accuracy results for tourist localization with emergent language, showing continuous (Cont.) and discrete (Disc.) communication, along with the prediction upper bound. T denotes the length of the path and a 3 in the “MASC” column indicates that the model is conditioned on the communicated actions.",
"Table 3: Localization accuracy of tourist communicating in natural language.",
"Table 5: Localization given last {1, 3, 5} dialogue utterances (including the guide). We observe that 1) performance increases when more utterances are included; and 2) MASC outperforms no-MASC in all cases; and 3) mean T̂ increases when more dialogue context is included.",
"Table 7: Full task performance of localization models trained on human and random trajectories. There are small benefits for training on random trajectories, but the most important hyperparameter is to condition the tourist utterance on a single observation (i.e. trajectories of size T = 0.)",
"Table 6: Localization performance using pretrained tourist (via imitation learning) with beam search decoding of varying beam size. We find that larger beam-sizes lead to worse localization performance.",
"Table 8: Samples from the tourist models communicating in natural language.",
"Figure 2: We show MASC values of two action sequences for tourist localization via discrete communication with T = 3 actions. In general, we observe that the first action always corresponds to the correct state-transition, whereas the second and third are sometimes mixed. For instance, in the top example, the first two actions are correctly predicted but the third action is not (as the MASC corresponds to a “no action”). In the bottom example, the second action appears as the third MASC.",
"Table 9: Accuracy of localization models on full task, using evaluation protocol defined in Algorithm 1. We report the average over 3 runs.",
"Figure 3: Result of running the text recognizer of [20] on four examples of the Hell’s Kitchen neighborhood. Top row: two positive examples. Bottom row: example of false negative (left) and many false positives (right)",
"Figure 4: Frequency of landmark classes",
"Table 10: Results for landmark classification.",
"Figure 5: Map of New York City with red rectangles indicating the captured neighborhoods of the Talk The Walk dataset.",
"Figure 6: Set of instructions presented to turkers before starting their first task.",
"Figure 7: (cont.) Set of instructions presented to turkers before starting their first task."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"13-Table5-1.png",
"13-Table7-1.png",
"13-Table6-1.png",
"14-Table8-1.png",
"14-Figure2-1.png",
"15-Table9-1.png",
"17-Figure3-1.png",
"18-Figure4-1.png",
"18-Table10-1.png",
"19-Figure5-1.png",
"21-Figure6-1.png",
"22-Figure7-1.png"
]
} |
1907.02030 | Real-time Claim Detection from News Articles and Retrieval of Semantically-Similar Factchecks | Factchecking has always been a part of the journalistic process. However with newsroom budgets shrinking it is coming under increasing pressure just as the amount of false information circulating is on the rise. We therefore propose a method to increase the efficiency of the factchecking process, using the latest developments in Natural Language Processing (NLP). This method allows us to compare incoming claims to an existing corpus and return similar, factchecked, claims in a live system-allowing factcheckers to work simultaneously without duplicating their work. | {
"section_name": [
"Introduction",
"Related Work",
"Method",
"Choosing an embedding",
"Clustering Method",
"Next Steps"
],
"paragraphs": [
[
"In recent years, the spread of misinformation has become a growing concern for researchers and the public at large BIBREF1 . Researchers at MIT found that social media users are more likely to share false information than true information BIBREF2 . Due to renewed focus on finding ways to foster healthy political conversation, the profile of factcheckers has been raised.",
"Factcheckers positively influence public debate by publishing good quality information and asking politicians and journalists to retract misleading or false statements. By calling out lies and the blurring of the truth, they make those in positions of power accountable. This is a result of labour intensive work that involves monitoring the news for spurious claims and carrying out rigorous research to judge credibility. So far, it has only been possible to scale their output upwards by hiring more personnel. This is problematic because newsrooms need significant resources to employ factcheckers. Publication budgets have been decreasing, resulting in a steady decline in the size of their workforce BIBREF0 . Factchecking is not a directly profitable activity, which negatively affects the allocation of resources towards it in for-profit organisations. It is often taken on by charities and philanthropists instead.",
"To compensate for this shortfall, our strategy is to harness the latest developments in NLP to make factchecking more efficient and therefore less costly. To this end, the new field of automated factchecking has captured the imagination of both non-profits and start-ups BIBREF3 , BIBREF4 , BIBREF5 . It aims to speed up certain aspects of the factchecking process rather than create AI that can replace factchecking personnel. This includes monitoring claims that are made in the news, aiding decisions about which statements are the most important to check and automatically retrieving existing factchecks that are relevant to a new claim.",
"The claim detection and claim clustering methods that we set out in this paper can be applied to each of these. We sought to devise a system that would automatically detect claims in articles and compare them to previously submitted claims. Storing the results to allow a factchecker's work on one of these claims to be easily transferred to others in the same cluster."
],
[
"It is important to decide what sentences are claims before attempting to cluster them. The first such claim detection system to have been created is ClaimBuster BIBREF6 , which scores sentences with an SVM to determine how likely they are to be politically pertinent statements. Similarly, ClaimRank BIBREF7 uses real claims checked by factchecking institutions as training data in order to surface sentences that are worthy of factchecking.",
"These methods deal with the question of what is a politically interesting claim. In order to classify the objective qualities of what set apart different types of claims, the ClaimBuster team created PolitiTax BIBREF8 , a taxonomy of claims, and factchecking organisation Full Fact BIBREF9 developed their preferred annotation schema for statements in consultation with their own factcheckers. This research provides a more solid framework within which to construct claim detection classifiers.",
"The above considers whether or not a sentence is a claim, but often claims are subsections of sentences and multiple claims might be found in one sentence. In order to accommodate this, BIBREF10 proposes extracting phrases called Context Dependent Claims (CDC) that are relevant to a certain `Topic'. Along these lines, BIBREF11 proposes new definitions for frames to be incorporated into FrameNet BIBREF12 that are specific to facts, in particular those found in a political context.",
"Traditional text clustering methods, using TFIDF and some clustering algorithm, are poorly suited to the problem of clustering and comparing short texts, as they can be semantically very similar but use different words. This is a manifestation of the the data sparsity problem with Bag-of-Words (BoW) models. BIBREF16 . Dimensionality reduction methods such as Latent Dirichlet Allocation (LDA) can help solve this problem by giving a dense approximation of this sparse representation BIBREF17 . More recently, efforts in this area have used text embedding-based systems in order to capture dense representation of the texts BIBREF18 . Much of this recent work has relied on the increase of focus in word and text embeddings. Text embeddings have been an increasingly popular tool in NLP since the introduction of Word2Vec BIBREF19 , and since then the number of different embeddings has exploded. While many focus on giving a vector representation of a word, an increasing number now exist that will give a vector representation of a entire sentence or text. Following on from this work, we seek to devise a system that can run online, performing text clustering on the embeddings of texts one at a time",
"Some considerations to bear in mind when deciding on an embedding scheme to use are: the size of the final vector, the complexity of the model itself and, if using a pretrained implementation, the data the model has been trained on and whether it is trained in a supervised or unsupervised manner.",
"The size of the embedding can have numerous results downstream. In our example we will be doing distance calculations on the resultant vectors and therefore any increase in length will increase the complexity of those distance calculations. We would therefore like as short a vector as possible, but we still wish to capture all salient information about the claim; longer vectors have more capacity to store information, both salient and non-salient.",
"A similar effect is seen for the complexity of the model. A more complicated model, with more trainable parameters, may be able to capture finer details about the text, but it will require a larger corpus to achieve this, and will require more computational time to calculate the embeddings. We should therefore attempt to find the simplest embedding system that can accurately solve our problem.",
"When attempting to use pretrained models to help in other areas, it is always important to ensure that the models you are using are trained on similar material, to increase the chance that their findings will generalise to the new problem. Many unsupervised text embeddings are trained on the CommonCrawl dataset of approx. 840 billion tokens. This gives a huge amount of data across many domains, but requires a similarly huge amount of computing power to train on the entire dataset. Supervised datasets are unlikely ever to approach such scale as they require human annotations which can be expensive to assemble. The SNLI entailment dataset is an example of a large open source dataset BIBREF20 . It features pairs of sentences along with labels specifying whether or not one entails the other. Google's Universal Sentence Encoder (USE) BIBREF14 is a sentence embedding created with a hybrid supervised/unsupervised method, leveraging both the vast amounts of unsupervised training data and the extra detail that can be derived from a supervised method. The SNLI dataset and the related MultiNLI dataset are often used for this because textual entailment is seen as a good basis for general Natural Language Understanding (NLU) BIBREF21 ."
],
[
"It is much easier to build a dataset and reliably evaluate a model if the starting definitions are clear and objective. Questions around what is an interesting or pertinent claim are inherently subjective. For example, it is obvious that a politician will judge their opponents' claims to be more important to factcheck than their own.",
"Therefore, we built on the methodologies that dealt with the objective qualities of claims, which were the PolitiTax and Full Fact taxonomies. We annotated sentences from our own database of news articles based on a combination of these. We also used the Full Fact definition of a claim as a statement about the world that can be checked. Some examples of claims according to this definition are shown in Table TABREF3 . We decided the first statement was a claim since it declares the occurrence of an event, while the second was considered not to be a claim as it is an expression of feeling.",
"Full Fact's approach centred around using sentence embeddings as a feature engineering step, followed by a simple classifier such as logistic regression, which is what we used. They used Facebook's sentence embeddings, InferSent BIBREF13 , which was a recent breakthrough at the time. Such is the speed of new development in the field that since then, several papers describing textual embeddings have been published. Due to the fact that we had already evaluated embeddings for clustering, and therefore knew our system would rely on Google USE Large BIBREF14 , we decided to use this instead. We compared this to TFIDF and Full Fact's results as baselines. The results are displayed in Table TABREF4 .",
"However, ClaimBuster and Full Fact focused on live factchecking of TV debates. Logically is a news aggregator and we analyse the bodies of published news stories. We found that in our corpus, the majority of sentences are claims and therefore our model needed to be as selective as possible. In practice, we choose to filter out sentences that are predictions since generally the substance of the claim cannot be fully checked until after the event has occurred. Likewise, we try to remove claims based on personal experience or anecdotal evidence as they are difficult to verify."
],
[
"In order to choose an embedding, we sought a dataset to represent our problem. Although no perfect matches exist, we decided upon the Quora duplicate question dataset BIBREF22 as the best match. To study the embeddings, we computed the euclidean distance between the two questions using various embeddings, to study the distance between semantically similar and dissimilar questions.",
"The graphs in figure 1 show the distances between duplicate and non-duplicate questions using different embedding systems. The X axis shows the euclidean distance between vectors and the Y axis frequency. A perfect result would be a blue peak to the left and an entirely disconnected orange spike to the right, showing that all non-duplicate questions have a greater euclidean distance than the least similar duplicate pair of questions. As can be clearly seen in the figure above, Elmo BIBREF23 and Infersent BIBREF13 show almost no separation and therefore cannot be considered good models for this problem. A much greater disparity is shown by the Google USE models BIBREF14 , and even more for the Google USE Large model. In fact the Google USE Large achieved a F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates.",
"In order to test whether these results generalised to our domain, we devised a test that would make use of what little data we had to evaluate. We had no original data on whether sentences were semantically similar, but we did have a corpus of articles clustered into stories. Working on the assumption that similar claims would be more likely to be in the same story, we developed an equation to judge how well our corpus of sentences was clustered, rewarding clustering which matches the article clustering and the total number of claims clustered. The precise formula is given below, where INLINEFORM0 is the proportion of claims in clusters from one story cluster, INLINEFORM1 is the proportion of claims in the correct claim cluster, where they are from the most common story cluster, and INLINEFORM2 is the number of claims placed in clusters. A,B and C are parameters to tune. INLINEFORM3 ",
" figureFormula to assess the correctness of claim clusters based on article clusters",
"This method is limited in how well it can represent the problem, but it can give indications as to a good or bad clustering method or embedding, and can act as a check that the findings we obtained from the Quora dataset will generalise to our domain. We ran code which vectorized 2,000 sentences and then used the DBScan clustering method BIBREF24 to cluster using a grid search to find the best INLINEFORM0 value, maximizing this formula. We used DBScan as it mirrored the clustering method used to derive the original article clusters. The results for this experiment can be found in Table TABREF10 . We included TFIDF in the experiment as a baseline to judge other results. It is not suitable for our eventual purposes, but it the basis of the original keyword-based model used to build the clusters . That being said, TFIDF performs very well, with only Google USE Large and Infersent coming close in terms of `accuracy'. In the case of Infersent, this comes with the penalty of a much smaller number of claims included in the clusters. Google USE Large, however, clusters a greater number and for this reason we chose to use Google's USE Large. ",
"Since Google USE Large was the best-performing embedding in both the tests we devised, this was our chosen embedding to use for clustering. However as can be seen from the results shown above, this is not a perfect solution and the inaccuracy here will introduce inaccuracy further down the clustering pipeline."
],
[
"We decided to follow a methodology upon the DBScan method of clustering BIBREF24 . DBScan considers all distances between pairs of points. If they are under INLINEFORM0 then those two are linked. Once the number of connected points exceeds a minimum size threshold, they are considered a cluster and all other points are considered to be unclustered. This method is advantageous for our purposes because unlike other methods, such as K-Means, it does not require the number of clusters to be specified. To create a system that can build clusters dynamically, adding one point at a time, we set the minimum cluster size to one, meaning that every point is a member of a cluster.",
"A potential disadvantage of this method is that because points require only one connection to a cluster to join it, they may only be related to one point in the cluster, but be considered in the same cluster as all of them. In small examples this is not a problem as all points in the cluster should be very similar. However as the number of points being considered grows, this behaviour raises the prospect of one or several borderline clustering decisions leading to massive clusters made from tenuous connections between genuine clusters. To mitigate this problem we used a method described in the Newslens paper BIBREF25 to solve a similar problem when clustering entire articles. We stored all of our claims in a graph with the connections between them added when the distance between them was determined to be less than INLINEFORM0 . To determine the final clusters we run a Louvain Community Detection BIBREF26 over this graph to split it into defined communities. This improved the compactness of a cluster. When clustering claims one by one, this algorithm can be performed on the connected subgraph featuring the new claim, to reduce the computation required.",
"As this method involves distance calculations between the claim being added and every existing claim, the time taken to add one claim will increase roughly linearly with respect to the number of previous claims. Through much optimization we have brought the computational time down to approximately 300ms per claim, which stays fairly static with respect to the number of previous claims."
],
[
"The clustering described above is heavily dependent on the embedding used. The rate of advances in this field has been rapid in recent years, but an embedding will always be an imperfect representation of an claim and therefore always an area of improvement. A domain specific-embedding will likely offer a more accurate representation but creates problems with clustering claims from different domains. They also require a huge amount of data to give a good model and that is not possible in all domains."
]
]
} | {
"question": [
"Do the authors report results only on English data?",
"How is the accuracy of the system measured?",
"How is an incoming claim used to retrieve similar factchecked claims?",
"What existing corpus is used for comparison in these experiments?",
"What are the components in the factchecking algorithm? "
],
"question_id": [
"e854edcc5e9111922e6e120ae17d062427c27ec1",
"bd6cec2ab620e67b3e0e7946fc045230e6906020",
"4b0ba460ae3ba7a813f204abd16cf631b871baca",
"63b0c93f0452d0e1e6355de1d0f3ff0fd67939fb",
"d27f23bcd80b12f6df8e03e65f9b150444925ecf"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"41a7d562fe15587355c173f7c19c2141377b456c",
"9704c613a01e2cc0a46f167ab55ed1b1c32acc6b"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates",
"distances between duplicate and non-duplicate questions using different embedding systems"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The graphs in figure 1 show the distances between duplicate and non-duplicate questions using different embedding systems. The X axis shows the euclidean distance between vectors and the Y axis frequency. A perfect result would be a blue peak to the left and an entirely disconnected orange spike to the right, showing that all non-duplicate questions have a greater euclidean distance than the least similar duplicate pair of questions. As can be clearly seen in the figure above, Elmo BIBREF23 and Infersent BIBREF13 show almost no separation and therefore cannot be considered good models for this problem. A much greater disparity is shown by the Google USE models BIBREF14 , and even more for the Google USE Large model. In fact the Google USE Large achieved a F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates.",
"In order to test whether these results generalised to our domain, we devised a test that would make use of what little data we had to evaluate. We had no original data on whether sentences were semantically similar, but we did have a corpus of articles clustered into stories. Working on the assumption that similar claims would be more likely to be in the same story, we developed an equation to judge how well our corpus of sentences was clustered, rewarding clustering which matches the article clustering and the total number of claims clustered. The precise formula is given below, where INLINEFORM0 is the proportion of claims in clusters from one story cluster, INLINEFORM1 is the proportion of claims in the correct claim cluster, where they are from the most common story cluster, and INLINEFORM2 is the number of claims placed in clusters. A,B and C are parameters to tune. INLINEFORM3"
],
"highlighted_evidence": [
"The graphs in figure 1 show the distances between duplicate and non-duplicate questions using different embedding systems.",
"Large achieved a F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates.",
"In order to test whether these results generalised to our domain, we devised a test that would make use of what little data we had to evaluate."
]
}
],
"annotation_id": [
"24d5fc7b9976c06aa1cafeb0cfc516af8fcb8a43"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"text clustering on the embeddings of texts"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Traditional text clustering methods, using TFIDF and some clustering algorithm, are poorly suited to the problem of clustering and comparing short texts, as they can be semantically very similar but use different words. This is a manifestation of the the data sparsity problem with Bag-of-Words (BoW) models. BIBREF16 . Dimensionality reduction methods such as Latent Dirichlet Allocation (LDA) can help solve this problem by giving a dense approximation of this sparse representation BIBREF17 . More recently, efforts in this area have used text embedding-based systems in order to capture dense representation of the texts BIBREF18 . Much of this recent work has relied on the increase of focus in word and text embeddings. Text embeddings have been an increasingly popular tool in NLP since the introduction of Word2Vec BIBREF19 , and since then the number of different embeddings has exploded. While many focus on giving a vector representation of a word, an increasing number now exist that will give a vector representation of a entire sentence or text. Following on from this work, we seek to devise a system that can run online, performing text clustering on the embeddings of texts one at a time"
],
"highlighted_evidence": [
"While many focus on giving a vector representation of a word, an increasing number now exist that will give a vector representation of a entire sentence or text. Following on from this work, we seek to devise a system that can run online, performing text clustering on the embeddings of texts one at a time"
]
}
],
"annotation_id": [
"4a164a1c5aedb2637432d8f4190f2d038e6c1471"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Quora duplicate question dataset BIBREF22"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In order to choose an embedding, we sought a dataset to represent our problem. Although no perfect matches exist, we decided upon the Quora duplicate question dataset BIBREF22 as the best match. To study the embeddings, we computed the euclidean distance between the two questions using various embeddings, to study the distance between semantically similar and dissimilar questions."
],
"highlighted_evidence": [
"Although no perfect matches exist, we decided upon the Quora duplicate question dataset BIBREF22 as the best match. To study the embeddings, we computed the euclidean distance between the two questions using various embeddings, to study the distance between semantically similar and dissimilar questions."
]
}
],
"annotation_id": [
"7b5110b00804dbe63211ead9d0a0ec34c77fafa8"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"16131c760b6070bed02968f7f183b9c3841ad1d4"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Examples of claims taken from real articles.",
"Table 2: Claim Detection Results.",
"Figure 1: Analysis of Different Embeddings on the Quora Question Answering Dataset",
"Table 3: Comparing Sentence Embeddings for Clustering News Claims."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure1-1.png",
"4-Table3-1.png"
]
} |
1910.04601 | RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension | Recent studies revealed that reading comprehension (RC) systems learn to exploit annotation artifacts and other biases in current datasets. This allows systems to "cheat" by employing simple heuristics to answer questions, e.g. by relying on semantic type consistency. This means that current datasets are not well-suited to evaluate RC systems. To address this issue, we introduce RC-QED, a new RC task that requires giving not only the correct answer to a question, but also the reasoning employed for arriving at this answer. For this, we release a large benchmark dataset consisting of 12,000 answers and corresponding reasoning in form of natural language derivations. Experiments show that our benchmark is robust to simple heuristics and challenging for state-of-the-art neural path ranking approaches. | {
"section_name": [
"Introduction",
"Task formulation: RC-QED ::: Input, output, and evaluation metrics",
"Task formulation: RC-QED ::: RC-QED@!START@$^{\\rm E}$@!END@",
"Data collection for RC-QED@!START@$^{\\rm E}$@!END@ ::: Crowdsourcing interface",
"Data collection for RC-QED@!START@$^{\\rm E}$@!END@ ::: Crowdsourcing interface ::: Judgement task (Figure @!START@UID13@!END@).",
"Data collection for RC-QED@!START@$^{\\rm E}$@!END@ ::: Crowdsourcing interface ::: Derivation task (Figure @!START@UID14@!END@).",
"Data collection for RC-QED@!START@$^{\\rm E}$@!END@ ::: Dataset",
"Data collection for RC-QED@!START@$^{\\rm E}$@!END@ ::: Results",
"Data collection for RC-QED@!START@$^{\\rm E}$@!END@ ::: Results ::: Quality",
"Data collection for RC-QED@!START@$^{\\rm E}$@!END@ ::: Results ::: Agreement",
"Baseline RC-QED@!START@$^{\\rm E}$@!END@ model",
"Baseline RC-QED@!START@$^{\\rm E}$@!END@ model ::: Knowledge graph construction",
"Baseline RC-QED@!START@$^{\\rm E}$@!END@ model ::: Path ranking-based KGC (PRKGC)",
"Baseline RC-QED@!START@$^{\\rm E}$@!END@ model ::: Training",
"Baseline RC-QED@!START@$^{\\rm E}$@!END@ model ::: Training ::: Semi-supervising derivations",
"Experiments ::: Settings ::: Dataset",
"Experiments ::: Settings ::: Hyperparameters",
"Experiments ::: Settings ::: Baseline",
"Experiments ::: Results and discussion",
"Experiments ::: Results and discussion ::: QA performance.",
"Related work ::: RC datasets with explanations",
"Related work ::: Analysis of RC models and datasets",
"Related work ::: Other NLP corpora annotated with explanations",
"Conclusions",
"Example annotations"
],
"paragraphs": [
[
"Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems and a large number of datasets are now available BIBREF0, BIBREF1, BIBREF2. However, these datasets suffer from annotation artifacts and other biases, which allow systems to “cheat”: Instead of learning to read texts, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a matching semantic type BIBREF3, BIBREF4. To give another example, many RC datasets contain a large number of “easy” problems that can be solved by looking at the first few words of the question Sugawara2018. In order to provide a reliable measure of progress, an RC dataset thus needs to be robust to such simple heuristics.",
"Towards this goal, two important directions have been investigated. One direction is to improve the dataset itself, for example, so that it requires an RC system to perform multi-hop inferences BIBREF0 or to generate answers BIBREF1. Another direction is to request a system to output additional information about answers. Yang2018HotpotQA:Answering propose HotpotQA, an “explainable” multi-hop Question Answering (QA) task that requires a system to identify a set of sentences containing supporting evidence for the given answer. We follow the footsteps of Yang2018HotpotQA:Answering and explore an explainable multi-hop QA task.",
"In the community, two important types of explanations have been explored so far BIBREF5: (i) introspective explanation (how a decision is made), and (ii) justification explanation (collections of evidences to support the decision). In this sense, supporting facts in HotpotQA can be categorized as justification explanations. The advantage of using justification explanations as benchmark is that the task can be reduced to a standard classification task, which enables us to adopt standard evaluation metrics (e.g. a classification accuracy). However, this task setting does not evaluate a machine's ability to (i) extract relevant information from justification sentences and (ii) synthesize them to form coherent logical reasoning steps, which are equally important for NLU.",
"To address this issue, we propose RC-QED, an RC task that requires not only the answer to a question, but also an introspective explanation in the form of a natural language derivation (NLD). For example, given the question “Which record company released the song Barracuda?” and supporting documents shown in Figure FIGREF1, a system needs to give the answer “Portrait Records” and to provide the following NLD: 1.) Barracuda is on Little Queen, and 2.) Little Queen was released by Portrait Records.",
"The main difference between our work and HotpotQA is that they identify a set of sentences $\\lbrace s_2,s_4\\rbrace $, while RC-QED requires a system to generate its derivations in a correct order. This generation task enables us to measure a machine's logical reasoning ability mentioned above. Due to its subjective nature of the natural language derivation task, we evaluate the correctness of derivations generated by a system with multiple reference answers. Our contributions can be summarized as follows:",
"We create a large corpus consisting of 12,000 QA pairs and natural language derivations. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations.",
"Through an experiment using two baseline models, we highlight several challenges of RC-QED.",
"We will make the corpus of reasoning annotations and the baseline system publicly available at https://naoya-i.github.io/rc-qed/."
],
[
"We formally define RC-QED as follows:",
"Given: (i) a question $Q$, and (ii) a set $S$ of supporting documents relevant to $Q$;",
"Find: (i) answerability $s \\in \\lbrace \\textsf {Answerable},$ $\\textsf {Unanswerable} \\rbrace $, (ii) an answer $a$, and (iii) a sequence $R$ of derivation steps.",
"We evaluate each prediction with the following evaluation metrics:",
"Answerability: Correctness of model's decision on answerability (i.e. binary classification task) evaluated by Precision/Recall/F1.",
"Answer precision: Correctness of predicted answers (for Answerable predictions only). We follow the standard practice of RC community for evaluation (e.g. an accuracy in the case of multiple choice QA).",
"Derivation precision: Correctness of generated NLDs evaluated by ROUGE-L BIBREF6 (RG-L) and BLEU-4 (BL-4) BIBREF7. We follow the standard practice of evaluation for natural language generation BIBREF1. Derivation steps might be subjective, so we resort to multiple reference answers."
],
[
"This paper instantiates RC-QED by employing multiple choice, entity-based multi-hop QA BIBREF0 as a testbed (henceforth, RC-QED$^{\\rm E}$). In entity-based multi-hop QA, machines need to combine relational facts between entities to derive an answer. For example, in Figure FIGREF1, understanding the facts about Barracuda, Little Queen, and Portrait Records stated in each article is required. This design choice restricts a problem domain, but it provides interesting challenges as discussed in Section SECREF46. In addition, such entity-based chaining is known to account for the majority of reasoning types required for multi-hop reasoning BIBREF2.",
"More formally, given (i) a question $Q=(r, q)$ represented by a binary relation $r$ and an entity $q$ (question entity), (ii) relevant articles $S$, and (iii) a set $C$ of candidate entities, systems are required to output (i) an answerability $s \\in \\lbrace \\textsf {Answerable}, \\textsf {Unanswerable} \\rbrace $, (ii) an entity $e \\in C$ (answer entity) that $(q, r, e)$ holds, and (iii) a sequence $R$ of derivation steps as to why $e$ is believed to be an answer. We define derivation steps as an $m$ chain of relational facts to derive an answer, i.e. $(q, r_1, e_1), (e_1, r_2, e_2), ..., (e_{m-1}, r_{m-1}, e_m),$ $(e_m, r_m, e_{m+1}))$. Although we restrict the form of knowledge to entity relations, we use a natural language form to represent $r_i$ rather than a closed vocabulary (see Figure FIGREF1 for an example)."
],
[
"To acquire a large-scale corpus of NLDs, we use crowdsourcing (CS). Although CS is a powerful tool for large-scale dataset creation BIBREF2, BIBREF8, quality control for complex tasks is still challenging. We thus carefully design an incentive structure for crowdworkers, following Yang2018HotpotQA:Answering.",
"Initially, we provide crowdworkers with an instruction with example annotations, where we emphasize that they judge the truth of statements solely based on given articles, not based on their own knowledge."
],
[
"Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”)."
],
[
"If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers).",
"We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\\ge 5,000$ HITs experiences and an approval rate of $\\ge $ 99.0%, and pay ¢20 as a reward per instance.",
"Our data collection pipeline is expected to be applicable to other types of QAs other than entity-based multi-hop QA without any significant extensions, because the interface is not specifically designed for entity-centric reasoning."
],
[
"Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary.",
"We use supporting documents provided by WikiHop. WikiHop collects supporting documents by finding Wikipedia articles that bridges a question entity $e_i$ and an answer entity $e_j$, where the link between articles is given by a hyperlink."
],
[
"Table TABREF17 shows the statistics of responses and example annotations. Table TABREF17 also shows the abstractiveness of annotated NLDs ($a$), namely the number of tokens in an NLD divided by the number of tokens in its corresponding justification sentences. This indicates that annotated NLDs are indeed summarized. See Table TABREF53 in Appendix and Supplementary Material for more results."
],
[
"To evaluate the quality of annotation results, we publish another CS task on AMT. We randomly sample 300 True and Likely responses in this evaluation. Given NLDs and a statement, 3 crowdworkers are asked if the NLDs can lead to the statement at four scale levels. If the answer is 4 or 3 (“yes” or “likely”), we additionally asked whether each derivation step can be derived from each supporting document; otherwise we asked them the reasons. For a fair evaluation, we encourage crowdworkers to annotate given NLDs with a lower score by stating that we give a bonus if they found a flaw of reasoning on the CS interface.",
"The evaluation results shown in Table TABREF24 indicate that the annotated NLDs are of high quality (Reachability), and each NLD is properly derived from supporting documents (Derivability).",
"On the other hand, we found the quality of 3-step NLDs is relatively lower than the others. Crowdworkers found that 45.3% of 294 (out of 900) 3-step NLDs has missing steps to derive a statement. Let us consider this example: for annotated NLDs “[1] Kouvola is located in Helsinki. [2] Helsinki is in the region of Uusimaa. [3] Uusimaa borders the regions Southwest Finland, Kymenlaakso and some others.” and for the statement “Kouvola is located in Kymenlaakso”, one worker pointed out the missing step “Uusimaa is in Kymenlaakso.”. We speculate that greater steps of reasoning make it difficult for crowdworkers to check the correctness of derivations during the writing task."
],
[
"For agreement on the number of NLDs, we obtained a Krippendorff's $\\alpha $ of 0.223, indicating a fair agreement BIBREF9.",
"Our manual inspection of the 10 worst disagreements revealed that majority (7/10) come from Unsure v.s. non-Unsure. It also revealed that crowdworkers who labeled non-Unsure are reliable—6 out 7 non-Unsure annotations can be judged as correct. This partially confirms the effectiveness of our incentive structure."
],
[
"To highlight the challenges and nature of RC-QED$^{\\rm E}$, we create a simple, transparent, and interpretable baseline model.",
"Recent studies on knowledge graph completion (KGC) explore compositional inferences to combat with the sparsity of knowledge bases BIBREF10, BIBREF11, BIBREF12. Given a query triplet $(h, r, t)$ (e.g. (Macchu Picchu, locatedIn, Peru)), a path ranking-based approach for KGC explicitly samples paths between $h$ and $t$ in a knowledge base (e.g. Macchu Picchu—locatedIn—Andes Mountain—countryOf—Peru), and construct a feature vector of these paths. This feature vector is then used to calculate the compatibility between the query triplet and the sampled paths.",
"RC-QED$^{\\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glassboxness: we can trace the derivation steps of the model easily."
],
[
"Given supporting documents $S$, we build a knowledge graph. We first apply a coreference resolver to $S$ and then create a directed graph $G(S)$. Therein, each node represents named entities (NEs) in $S$, and each edge represents textual relations between NEs extracted from $S$. Figure FIGREF27 illustrates an example of $G(S)$ constructed from supporting documents in Figure FIGREF1."
],
[
"Given a question $Q=(q, r)$ and a candidate entity $c_i$, we estimate the plausibility of $(q, r, c_i)$ as follows:",
"where $\\sigma $ is a sigmoid function, and $\\mathbf {q, r, c_i}, \\mathbf {\\pi }(q, c_i)$ are vector representations of $q, r, c_i$ and a set $\\pi (q, c_i)$ of shortest paths between $q$ and $c_i$ on $G(S)$. ${\\rm MLP}(\\cdot , \\cdot )$ denotes a multi-layer perceptron. To encode entities into vectors $\\mathbf {q, c_i}$, we use Long-Short Term Memory (LSTM) and take its last hidden state. For example, in Figure FIGREF27, $q =$ Barracuda and $c_i =$ Portrait Records yield $\\pi (q, c_i) = \\lbrace $Barracuda—is the most popular in their album—Little Queen—was released in May 1977 on—Portrait Records, Barracuda—was released from American band Heart—is the second album released by:-1—Little Queen—was released in May 1977 on—Portrait Records$\\rbrace $.",
"To obtain path representations $\\mathbf {\\pi }(q, c_i)$, we attentively aggregate individual path representations: $\\mathbf {\\pi }(q, c_i) = \\sum _j \\alpha _j \\mathbf {\\pi _j}(q, c_i)$, where $\\alpha _j$ is an attention for the $j$-th path. The attention values are calculated as follows: $\\alpha _j = \\exp ({\\rm sc}(q, r, c_i, \\pi _j)) / \\sum _k \\exp ({\\rm sc}(q, r, c_i, \\pi _k))$, where ${\\rm sc}(q, r, c_i, \\pi _j) = {\\rm MLP}(\\mathbf {q}, \\mathbf {r}, \\mathbf {c_i}, \\mathbf {\\pi _j})$. To obtain individual path representations $\\mathbf {\\pi _j}$, we follow toutanova-etal-2015-representing. We use a Bi-LSTM BIBREF13 with mean pooling over timestep in order to encourage similar paths to have similar path representations.",
"For the testing phase, we choose a candidate entity $c_i$ with the maximum probability $P(r|q, c_i)$ as an answer entity, and choose a path $\\pi _j$ with the maximum attention value $\\alpha _j$ as NLDs. To generate NLDs, we simply traverse the path from $q$ to $c_i$ and subsequently concatenate all entities and textual relations as one string. We output Unanswerable when (i) $\\max _{c_i \\in C} P(r|q, c_i) < \\epsilon _k$ or (ii) $G(S)$ has no path between $q$ and all $c_i \\in C$."
],
[
"Let $\\mathcal {K}^+$ be a set of question-answer pairs, where each instance consists of a triplet (a query entity $q_i$, a relation $r_i$, an answer entity $a_i$). Similarly, let $\\mathcal {K}^-$ be a set of question-non-answer pairs. We minimize the following binary cross-entropy loss:",
"From the NLD point of view, this is unsupervised training. The model is expected to learn the score function ${\\rm sc(\\cdot )}$ to give higher scores to paths (i.e. NLD steps) that are useful for discriminating correct answers from wrong answers by its own. Highly scored NLDs might be useful for answer classification, but these are not guaranteed to be interpretable to humans."
],
[
"To address the above issue, we resort to gold-standard NLDs to guide the path scoring function ${\\rm sc(\\cdot )}$. Let $\\mathcal {D}$ be question-answer pairs coupled with gold-standard NLDs, namely a binary vector $\\mathbf {p}_i$, where the $j$-th value represents whether $j$-th path corresponds to a gold-standard NLD (1) or not (0). We apply the following cross-entropy loss to the path attention:"
],
[
"We aggregated crowdsourced annotations obtained in Section SECREF3. As a preprocessing, we converted the NLD annotation to Unsure if the derivation contains the phrase needs to be mentioned. This is due to the fact that annotators misunderstand our instruction. When at least one crowdworker state that a statement is Unsure, then we set the answerability to Unanswerable and discard NLD annotations. Otherwise, we employ all NLD annotations from workers as multiple reference NLDs. The statistics is shown in Table TABREF36.",
"Regarding $\\mathcal {K}^+, \\mathcal {K}^-$, we extracted 867,936 instances from the training set of WikiHop BIBREF0. We reserve 10% of these instances as a validation set to find the best model. For $\\mathcal {D}$, we used Answerable questions in the training set. To create supervision of path (i.e. $\\mathbf {p}_i$), we selected the path that is most similar to all NLD annotations in terms of ROUGE-L F1."
],
[
"We used 100-dimensional vectors for entities, relations, and textual relation representations. We initialize these representations with 100-dimensional Glove Embeddings BIBREF14 and fine-tuned them during training. We retain only top-100,000 frequent words as a model vocabulary. We used Bi-LSTM with 50 dimensional hidden state as a textual relation encoder, and an LSTM with 100-dimensional hidden state as an entity encoder. We used the Adam optimizer (default parameters) BIBREF15 with a batch size of 32. We set the answerability threshold $\\epsilon _k = 0.5$."
],
[
"To check the integrity of the PRKGC model, we created a simple baseline model (shortest path model). It outputs a candidate entity with the shortest path length from a query entity on $G(S)$ as an answer. Similarly to the PRKGC model, it traverses the path to generate NLDs. It outputs Unanswerable if (i) a query entity is not reachable to any candidate entities on $G(S)$ or (ii) the shortest path length is more than 3."
],
[
"As shown in Table TABREF37, the PRKGC models learned to reason over more than simple shortest paths. Yet, the PRKGC model do not give considerably good results, which indicates the non-triviality of RC-QED$^{\\rm E}$. Although the PRKGC model do not receive supervision about human-generated NLDs, paths with the maximum score match human-generated NLDs to some extent.",
"Supervising path attentions (the PRKGC+NS model) is indeed effective for improving the human interpretability of generated NLDs. It also improves the generalization ability of question answering. We speculate that $L_d$ functions as a regularizer, which helps models to learn reasoning that helpful beyond training data. This observation is consistent with previous work where an evidence selection task is learned jointly with a main task BIBREF11, BIBREF2, BIBREF5.",
"As shown in Table TABREF43, as the required derivation step increases, the PRKGC+NS model suffers from predicting answer entities and generating correct NLDs. This indicates that the challenge of RC-QED$^{\\rm E}$ is in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer.",
"To obtain further insights, we manually analyzed generated NLDs. Table TABREF44 (a) illustrates a positive example, where the model identifies that altudoceras belongs to pseudogastrioceratinae, and that pseudogastrioceratinae is a subfamily of paragastrioceratidae. Some supporting sentences are already similar to human-generated NLDs, thus simply extracting textual relations works well for some problems.",
"On the other hand, typical derivation error is from non-human readable textual relations. In (b), the model states that bumped has a relationship of “,” with hands up, which is originally extracted from one of supporting sentences It contains the UK Top 60 singles “Bumped”, “Hands Up (4 Lovers)” and .... This provides a useful clue for answer prediction, but is not suitable as a derivation. One may address this issue by incorporating, for example, a relation extractor or a paraphrasing mechanism using recent advances of conditional language models BIBREF20."
],
[
"To check the integrity of our baseline models, we compare our baseline models with existing neural models tailored for QA under the pure WikiHop setting (i.e. evaluation with only an accuracy of predicted answers). Note that these existing models do not output derivations. We thus cannot make a direct comparison, so it servers as a reference purpose. Because WikiHop has no answerability task, we enforced the PRKGC model to always output answers. As shown in Table TABREF45, the PRKGC models achieve a comparable performance to other sophisticated neural models."
],
[
"There exists few RC datasets annotated with explanations (Table TABREF50). The most similar work to ours is Science QA dataset BIBREF21, BIBREF22, BIBREF23, which provides a small set of NLDs annotated for analysis purposes. By developing the scalable crowdsourcing framework, our work provides one order-of-magnitude larger NLDs which can be used as a benchmark more reliably. In addition, it provides the community with new types of challenges not included in HotpotQA."
],
[
"There is a large body of work on analyzing the nature of RC datasets, motivated by the question to what degree RC models understand natural language BIBREF3, BIBREF4. Several studies suggest that current RC datasets have unintended bias, which enables RC systems to rely on a cheap heuristics to answer questions. For instance, Sugawara2018 show that some of these RC datasets contain a large number of “easy” questions that can be solved by a cheap heuristics (e.g. by looking at a first few tokens of questions). Responding to their findings, we take a step further and explore the new task of RC that requires RC systems to give introspective explanations as well as answers. In addition, recent studies show that current RC models and NLP models are vulnerable to adversarial examples BIBREF29, BIBREF30, BIBREF31. Explicit modeling of NLDs is expected to reguralize RC models, which could prevent RC models' strong dependence on unintended bias in training data (e.g. annotation artifact) BIBREF32, BIBREF8, BIBREF2, BIBREF5, as partially confirmed in Section SECREF46."
],
[
"There are existing NLP tasks that require models to output explanations (Table TABREF50). FEVER BIBREF25 requires a system to judge the “factness” of a claim as well as to identify justification sentences. As discussed earlier, we take a step further from justification explanations to provide new challenges for NLU.",
"Several datasets are annotated with introspective explanations, ranging from textual entailments BIBREF8 to argumentative texts BIBREF26, BIBREF27, BIBREF33. All these datasets offer the classification task of single sentences or sentence pairs. The uniqueness of our dataset is that it measures a machine's ability to extract relevant information from a set of documents and to build coherent logical reasoning steps."
],
[
"Towards RC models that can perform correct reasoning, we have proposed RC-QED that requires a system to output its introspective explanations, as well as answers. Instantiating RC-QED with entity-based multi-hop QA (RC-QED$^{\\rm E}$), we have created a large-scale corpus of NLDs. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations. Our experiments using two simple baseline models have demonstrated that RC-QED$^{\\rm E}$ is a non-trivial task, and that it indeed provides a challenging task of extracting and synthesizing relevant facts from supporting documents. We will make the corpus of reasoning annotations and baseline systems publicly available at https://naoya-i.github.io/rc-qed/.",
"One immediate future work is to expand the annotation to non-entity-based multi-hop QA datasets such as HotpotQA BIBREF2. For modeling, we plan to incorporate a generative mechanism based on recent advances in conditional language modeling."
],
[
"Table TABREF53 shows examples of crowdsourced annotations."
]
]
} | {
"question": [
"What is the baseline?",
"What dataset was used in the experiment?",
"Did they use any crowdsourcing platform?",
"How was the dataset annotated?",
"What is the source of the proposed dataset?"
],
"question_id": [
"b11ee27f3de7dd4a76a1f158dc13c2331af37d9f",
"7aba5e4483293f5847caad144ee0791c77164917",
"565d668947ffa6d52dad019af79289420505889b",
"d83304c70fe66ae72e78aa1d183e9f18b7484cd6",
"e90ac9ee085dc2a9b6fe132245302bbce5f3f5ab"
],
"nlp_background": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" path ranking-based KGC (PRKGC)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"RC-QED$^{\\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glassboxness: we can trace the derivation steps of the model easily."
],
"highlighted_evidence": [
"RC-QED$^{\\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively."
]
}
],
"annotation_id": [
"176a0eb62fad330484640a25897d3eb104448762"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"WikiHop"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary."
],
"highlighted_evidence": [
"Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used."
]
}
],
"annotation_id": [
"1b17dac7e1344b004d563757e365d821001f8d95"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\\ge 5,000$ HITs experiences and an approval rate of $\\ge $ 99.0%, and pay ¢20 as a reward per instance."
],
"highlighted_evidence": [
"We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\\ge 5,000$ HITs experiences and an approval rate of $\\ge $ 99.0%, and pay ¢20 as a reward per instance."
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\\ge 5,000$ HITs experiences and an approval rate of $\\ge $ 99.0%, and pay ¢20 as a reward per instance."
],
"highlighted_evidence": [
"We deployed the task on Amazon Mechanical Turk (AMT)"
]
}
],
"annotation_id": [
"4515083597c34540f822f301af48e426a572d54c",
"e1c9016cbf47eac0d69fb565268857319a9944e6"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable)",
"why they are unsure from two choices (“Not stated in the article” or “Other”)",
"The “summary” text boxes"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).",
"If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers)."
],
"highlighted_evidence": [
"Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).",
"If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences."
]
}
],
"annotation_id": [
"2dbb8ec4b5ae9fd4b8e4a092bb0c2c5b3ae76c6c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"ac92d63de33314c18dd45dec5e2a7032a3cd44a6"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Overview of the proposed RC-QED task. Given a question and supporting documents, a system is required to give an answer and its derivation steps.",
"Figure 2: Crowdsourcing interface: judgement task.",
"Figure 3: Crowdsourcing interface: derivation task.",
"Table 1: Distribution of worker responses and example responses of NLDs.",
"Table 2: Ratings of annotated NLDs by human judges.",
"Figure 4: Overview of the PRKGC model, exemplifying calculation of P (releasedBy|Barracuda, PortraitRecords) with G(S) constructed from Figure 1. Each node represents named entities in supporting documents S, and each edge represents textual relations extracted from S.",
"Table 3: Statistics of dataset after aggregation.",
"Table 4: Performance of RC-QEDE of our baseline models (see Section 2.1 for further details of each evaluation metrics). “NS” indicates the use of annotated NLDs as supervision (i.e. using Ld during training).",
"Table 5: Performance breakdown of the PRKGC+NS model. Derivation Precision denotes ROUGE-L F1 of generated NLDs.",
"Table 6: Example of PRKGC model’s derivation steps and answer prediction. For readability, entities are highlighted as blue (best viewed in color).",
"Table 7: Accuracy of our baseline models and previous work on WikiHop (Welbl et al., 2018)’s development set. Note that our baseline models are explainable, whereas the others are not. “NS” indicates the use of annotated NLDs as supervision. Accuracies of existing models are taken from the papers.",
"Table 8: Comparison to other corpora annotated with justification (JS.) or introspective explanations (IN.)."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Figure4-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"8-Table8-1.png"
]
} |
1912.05066 | Event Outcome Prediction using Sentiment Analysis and Crowd Wisdom in Microblog Feeds | Sentiment Analysis of microblog feeds has attracted considerable interest in recent times. Most of the current work focuses on tweet sentiment classification. But not much work has been done to explore how reliable the opinions of the mass (crowd wisdom) in social network microblogs such as Twitter are in predicting outcomes of certain events such as election debates. In this work, we investigate whether crowd wisdom is useful in predicting such outcomes and whether their opinions are influenced by the experts in the field. We work in the domain of multi-label classification to perform sentiment classification of tweets and obtain the opinion of the crowd. This learnt sentiment is then used to predict outcomes of events such as US Presidential Debate winners, Grammy Award winners, and Super Bowl winners. We find that in most of the cases, the wisdom of the crowd does indeed match with that of the experts, and in cases where they don't (particularly in the case of debates), we see that the crowd's opinion is actually influenced by that of the experts. | {
"section_name": [
"Introduction",
"Related Work",
"Data Set and Preprocessing ::: Data Collection",
"Data Set and Preprocessing ::: Preprocessing",
"Methodology ::: Procedure",
"Methodology ::: Machine Learning Models",
"Methodology ::: Machine Learning Models ::: Single-label Classification",
"Methodology ::: Machine Learning Models ::: Multi-label Classification",
"Methodology ::: Feature Space",
"Data Analysis",
"Evaluation Metrics",
"Evaluation Metrics ::: Sentiment Analysis",
"Evaluation Metrics ::: Outcome Prediction",
"Results ::: Sentiment Analysis",
"Results ::: Results for Outcome Prediction",
"Results ::: Results for Outcome Prediction ::: Presidential Debates",
"Results ::: Results for Outcome Prediction ::: Grammy Awards",
"Results ::: Results for Outcome Prediction ::: Super Bowl",
"Conclusions"
],
"paragraphs": [
[
"Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information. This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets\"). With over 400 million tweets per day on Twitter, microblog users generate large amount of data, which cover very rich topics ranging from politics, sports to celebrity gossip. Because the user generated content on microblogs covers rich topics and expresses sentiment/opinions of the mass, mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze peoples' sentiments and opinions over social networks.",
"Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0, BIBREF1, BIBREF2. Not much has been done on predicting outcomes of events based on the tweet sentiments, for example, predicting winners of presidential debates based on the tweets by analyzing the users' sentiments. This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate.",
"In this paper, we analyze three such events: 1) US Presidential Debates 2015-16, 2) Grammy Awards 2013, and 3) Super Bowl 2013. The main focus is on the analysis of the presidential debates. For the Grammys and the Super Bowl, we just perform sentiment analysis and try to predict the outcomes in the process. For the debates, in addition to the analysis done for the Grammys and Super Bowl, we also perform a trend analysis. Our analysis of the tweets for the debates is 3-fold as shown below.",
"Sentiment: Perform a sentiment analysis on the debates. This involves: building a machine learning model which learns the sentiment-candidate pair (candidate is the one to whom the tweet is being directed) from the training data and then using this model to predict the sentiment-candidate pairs of new tweets.",
"Predicting Outcome: Here, after predicting the sentiment-candidate pairs on the new data, we predict the winner of the debates based on the sentiments of the users.",
"Trends: Here, we analyze certain trends of the debates like the change in sentiments of the users towards the candidates over time (hours, days, months) and how the opinion of experts such as Washington Post affect the sentiments of the users.",
"For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about).",
"The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd.",
"We do find out that the public sentiments are not always coincident with the views of the experts. In this case, it is interesting to check whether the views of the experts can affect the public, for example, by spreading through the social media microblogs such as Twitter. Hence, we also conduct experiments to compare the public sentiment before and after the experts' views become public and thus notice the impact of the experts' views on the public sentiment. In our analysis of the debates, we observe that in certain debates, such as the 5th Republican Debate, held on December 15, 2015, the opinions of the users vary from the experts. But we see the effect of the experts on the sentiment of the users by looking at their opinions of the same candidates the next day.",
"Our contributions are mainly: we want to see how predictive the sentiment/opinion of the users are in social media microblogs and how it compares to that of the experts. In essence, we find that the crowd wisdom in the microblog domain matches that of the experts in most cases. There are cases, however, where they don't match but we observe that the crowd's sentiments are actually affected by the experts. This can be seen in our analysis of the presidential debates.",
"The rest of the paper is organized as follows: in section SECREF2, we review some of the literature. In section SECREF3, we discuss the collection and preprocessing of the data. Section SECREF4 details the approach taken, along with the features and the machine learning methods used. Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future."
],
[
"Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph.",
"There has been some work in event detection and extraction in microblogs as well. In BIBREF9, the authors describe a way to extract major life events of a user based on tweets that either congratulate/offer condolences. BIBREF10 build a key-word graph from the data and then detect communities in this graph (cluster) to find events. This works because words that describe similar events will form clusters. In BIBREF11, the authors use distant supervision to extract events. There has also been some work on event retrieval in microblogs BIBREF12. In BIBREF13, the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only non-textual features to do so. In BIBREF14, the authors study how much of the opinion extracted from Online Social Networks (OSN) user data is reflective of the opinion of the larger population. Researchers have also mined Twitter dataset to analyze public reaction to various events: from election debate performance BIBREF15, where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events, to movie box-office predictions on the release day BIBREF16. Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie box-office scores. The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes. Recently, several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24."
],
[
"Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shorten words etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@\" in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets.",
"We conduct experiments on 3 different datasets, as mentioned earlier: 1) US Presidential Debates, 2) Grammy Awards 2013, 3) Superbowl 2013. To construct our presidential debates dataset, we have used the Twitter Search API to collect the tweets. Since there was no publicly available dataset for this, we had to collect the data manually. The data was collected on 10 different presidential debates: 7 republican and 3 democratic, from October 2015 to March 2016. Different hashtags like “#GOP, #GOPDebate” were used to filter out tweets specific to the debate. This is given in Table TABREF2. We extracted only english tweets for our dataset. We collected a total of 104961 tweets were collected across all the debates. But there were some limitations with the API. Firstly, the server imposes a rate limit and discards tweets when the limit is reached. The second problem is that the API returns many duplicates. Thus, after removing the duplicates and irrelevant tweets, we were left with a total of 17304 tweets. This includes the tweets only on the day of the debate. We also collected tweets on the days following the debate.",
"As for the other two datasets, we collected them from available-online repositories. There were a total of 2580062 tweets for the Grammy Awards 2013, and a total of 2428391 tweets for the Superbowl 2013. The statistics are given in Tables TABREF3 and TABREF3. The tweets for the grammy were before the ceremony and during. However, we only use the tweets before the ceremony (after the nominations were announced), to predict the winners. As for the superbowl, the tweets collected were during the game. But we can predict interesting things like Most Valuable Player etc. from the tweets. The tweets for both of these datasets were annotated and thus did not require any human intervention. However, the tweets for the debates had to be annotated.",
"Since we are using a supervised approach in this paper, we have all the tweets (for debates) in the training set human-annotated. The tweets were already annotated for the Grammys and Super Bowl. Some statistics about our datasets are presented in Tables TABREF3, TABREF3 and TABREF3. The annotations for the debate dataset comprised of 2 labels for each tweet: 1) Candidate: This is the candidate of the debate to whom the tweet refers to, 2) Sentiment: This represents the sentiment of the tweet towards that candidate. This is either positive or negative.",
"The task then becomes a case of multi-label classification. The candidate labels are not so trivial to obtain, because there are tweets that do not directly contain any candidates' name. For example, the tweets, “a business man for president??” and “a doctor might sure bring about a change in America!” are about Donal Trump and Ben Carson respectively. Thus, it is meaningful to have a multi-label task.",
"The annotations for the other two datasets are similar, in that one of the labels was the sentiment and the other was category-dependent in the outcome-prediction task, as discussed in the sections below. For example, if we are trying to predict the \"Album of the Year\" winners for the Grammy dataset, the second label would be the nominees for that category (album of the year)."
],
[
"As noted earlier, tweets are generally noisy and thus require some preprocessing done before using them. Several filters were applied to the tweets such as: (1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify it by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER. (2) URLs: In most of the tweets, users include links that add on to their text message. We convert/replace the link address to the token “URL”. (3) Repeated Letters: Oftentimes, users use repeated letters in a word to emphasize their notion. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funnyness. We replace such repeated occurrences of letters (more than 2), with just 3 occurrences. We replace with 3 occurrences and not 2, so that we can distinguish the exaggerated usage from the regular ones. (4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as \"I hate Donald Trump, but I will vote for him\". This is done so that there is no ambiguity. (5) Retweets: In Twitter, many times tweets of a person are copied and posted by another user. This is known as retweeting and they are commonly abbreviated with “RT”. These are removed and only the original tweets are processed. (6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet."
],
[
"Our analysis of the debates is 3-fold including sentiment analysis, outcome prediction, and trend analysis.",
"Sentiment Analysis: To perform a sentiment analysis on the debates, we first extract all the features described below from all the tweets in the training data. We then build the different machine learning models described below on these set of features. After that, we evaluate the output produced by the models on unseen test data. The models essentially predict candidate-sentiment pairs for each tweet.",
"Outcome Prediction: Predict the outcome of the debates. After obtaining the sentiments on the test data for each tweet, we can compute the net normalized sentiment for each presidential candidate in the debate. This is done by looking at the number of positive and negative sentiments for each candidate. We then normalize the sentiment scores of each candidate to be in the same scale (0-1). After that, we rank the candidates based on the sentiment scores and predict the top $k$ as the winners.",
"Trend Analysis: We also analyze some certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time, across debates. Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.",
"As for the Grammys and the Super Bowl, we only perform the sentiment analysis and predict the outcomes."
],
[
"We compare 4 different models for performing our task of sentiment classification. We then pick the best performing model for the task of outcome prediction. Here, we have two categories of algorithms: single-label and multi-label (We already discussed above why it is meaningful to have a multi-label task earlier), because one can represent $<$candidate, sentiment$>$ as a single class label or have candidate and sentiment as two separate labels. They are listed below:"
],
[
"Naive Bayes: We use a multinomial Naive Bayes model. A tweet $t$ is assigned a class $c^{*}$ such that",
"where there are $m$ features and $f_i$ represents the $i^{th}$ feature.",
"Support Vector Machines: Support Vector Machines (SVM) constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can then be used for classification. In our case, we use SVM with Sequential Minimal Optimization (SMO) BIBREF25, which is an algorithm for solving the quadratic programming (QP) problem that arises during the training of SVMs.",
"Elman Recurrent Neural Network: Recurrent Neural Networks (RNNs) are gaining popularity and are being applied to a wide variety of problems. They are a class of artificial neural networks, where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. The Elman RNN was proposed by Jeff Elman in the year 1990 BIBREF26. We use this in our task."
],
[
"RAkEL (RAndom k labELsets): RAkEL BIBREF27 is a multi-label classification algorithm that uses labeled powerset (LP) transformation: it basically creates a single binary classifier for every label combination and then uses multiple LP classifiers, each trained on a random subset of the actual labels, for classification."
],
[
"In order to classify the tweets, a set of features is extracted from each of the tweets, such as n-gram, part-of-speech etc. The details of these features are given below:",
"n-gram: This represents the frequency counts of n-grams, specifically that of unigrams and bigrams.",
"punctuation: The number of occurrences of punctuation symbols such as commas, exclamation marks etc.",
"POS (part-of-speech): The frequency of each POS tagger is used as the feature.",
"prior polarity scoring: Here, we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language (DAL) BIBREF28. This dictionary (DAL) of about 8000 English words assigns a pleasantness score to each word on a scale of 1-3. After normalizing, we can assign the words with polarity higher than $0.8$ as positive and less than $0.5$ as negative. If a word is not present in the dictionary, we lookup its synonyms in WordNet: if this word is there in the dictionary, we assign the original word its synonym's score.",
"Twitter Specific features:",
"Number of hashtags ($\\#$ symbol)",
"Number of mentioning users ( symbol)",
"Number of hyperlinks",
"Document embedding features: Here, we use the approach proposed by Mikolov et al. BIBREF3 to embed an entire tweet into a vector of features",
"Topic features: Here, LDA (Latent Dirichlet Allocation) BIBREF4 is used to extract topic-specific features for a tweet (document). This is basically the topic-document probability that is outputted by the model.",
"In the following experiments, we use 1-$gram$, 2-$gram$ and $(1+2)$-$gram$ to denote unigram, bigram and a combination of unigram and bigram features respectively. We also combine punctuation and the other features as miscellaneous features and use $MISC$ to denote this. We represent the document-embedding features by $DOC$ and topic-specific features by $TOPIC$."
],
[
"In this section, we analyze the presidential debates data and show some trends.",
"First, we look at the trend of the tweet frequency. Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time. We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered, the trend of the number of TV viewers matches the trend of the number of tweets. However, we can see that towards the final debates, the frequency of the tweets decreases consistently. This shows an interesting fact that although the people still watch the debates, the number of people who tweet about it are greatly reduced. But the tweeting community are mainly youngsters and this shows that most of the tweeting community, who actively tweet, didn't watch the later debates. Because if they did, then the trends should ideally match.",
"Next we look at how the tweeting activity is on days of the debate: specifically on the day of the debate, the next day and two days later. Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th republican debate, i.e December 15, 2015. As can be seen, the average number of people tweet more on the day of the debate than a day or two after it. This makes sense intuitively because the debate would be fresh in their heads.",
"Then, we look at how people tweet in the hours of the debate: specifically during the debate, one hour after and then two hours after. Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th republican debate, i.e December 15, 2015. We notice that people don't tweet much during the debate but the activity drastically increases after two hours. This might be because people were busy watching the debate and then taking some time to process things, so that they can give their opinion.",
"We have seen the frequency of tweets over time in the previous trends. Now, we will look at how the sentiments of the candidates change over time.",
"First, Figure FIGREF24 shows how the sentiments of the candidates changed across the debates. We find that many of the candidates have had ups and downs towards in the debates. But these trends are interesting in that, it gives some useful information about what went down in the debate that caused the sentiments to change (sometimes drastically). For example, if we look at the graph for Donald Trump, we see that his sentiment was at its lowest point during the debate held on December 15. Looking into the debate, we can easily see why this was the case. At a certain point in the debate, Trump was asked about his ideas for the nuclear triad. It is very important that a presidential candidate knows about this, but Trump had no idea what the nuclear triad was and, in a transparent attempt to cover his tracks, resorted to a “we need to be strong\" speech. It can be due to this embarrassment that his sentiment went down during this debate.",
"Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debates given by the experts affects the sentiment of the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidate during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates does indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace are generally prejudiced towards the candidates, which is reflected in their sentiments of the candidates on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment of Bush has gone up one day after the debate (essentially, one day after the results given by the experts are out).",
"There is some influence, for better or worse, of these experts on the minds of the users in the early debates, but towards the final debates the sentiments of the users are mostly unwavering, as can be seen in Figure FIGREF25. Figure FIGREF25 is on the last Republican debate, and suggests that the opinions of the users do not change much with time. Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered."
],
[
"In this section, we define the different evaluation metrics that we use for different tasks. We have two tasks at hand: 1) Sentiment Analysis, 2) Outcome Prediction. We use different metrics for these two tasks."
],
[
"In the study of sentiment analysis, we use “Hamming Loss” to evaluate the performance of the different methods. Hamming Loss, based on Hamming distance, takes into account the prediction error and the missing error, normalized over the total number of classes and total number of examples BIBREF29. The Hamming Loss is given below:",
"where $|D|$ is the number of examples in the dataset and $|L|$ is the number of labels. $S_i$ and $Y_i$ denote the sets of true and predicted labels for instance $i$ respectively. $\\oplus $ stands for the XOR operation BIBREF30. Intuitively, the performance is better, when the Hamming Loss is smaller. 0 would be the ideal case."
],
[
"For the case of outcome prediction, we will have a predicted set and an actual set of results. Thus, we can use common information retrieval metrics to evaluate the prediction performance. Those metrics are listed below:",
"Mean F-measure: F-measure takes into account both the precision and recall of the results. In essence, it takes into account how many of the relevant results are returned and also how many of the returned results are relevant.",
"where $|D|$ is the number of queries (debates/categories for grammy winners etc.), $P_i$ and $R_i$ are the precision and recall for the $i^{th}$ query.",
"Mean Average Precision: As a standard metric used in information retrieval, Mean Average Precision for a set of queries is mean of the average precision scores for each query:",
"where $|D|$ is the number of queries (e.g., debates), $P_i(k)$ is the precision at $k$ ($P@k$) for the $i^{th}$ query, $rel_i(k)$ is an indicator function, equaling 1 if the document at position $k$ for the $i^th$ query is relevant, else 0, and $|RD_i|$ is the number of relevant documents for the $i^{th}$ query."
],
[
"We use 3 different datasets for the problem of sentiment analysis, as already mentioned. We test the four different algorithms mentioned in Section SECREF6, with a different combination of features that are described in Section SECREF10. To evaluate our models, we use the “Hamming Loss” metric as discussed in Section SECREF6. We use this metric because our problem is in the multi-class classification domain. However, the single-label classifiers like SVM, Naive Bayes, Elman RNN cannot be evaluated against this metric directly. To do this, we split the predicted labels into a format that is consistent with that of multi-label classifiers like RaKel. The results of the experiments for each of the datasets are given in Tables TABREF34, TABREF34 and TABREF34. In the table, $f_1$, $f_2$, $f_3$, $f_4$, $f_5$ and $f_6$ denote the features 1-$gram$, 2-$gram$, $(1+2)$-$gram$, $(1+2)$-$gram + MISC$, $DOC$ and $DOC + TOPIC$ respectively. Note that lower values of hamming losses are more desirable. We find that RaKel performs the best out of all the algorithms. RaKel is more suited for the task because our task is a multi-class classification. Among all the single-label classifiers, SVM performs the best. We also observe that as we use more complex feature spaces, the performance increases. This is true for almost all of the algorithms listed.",
"Our best performing features is a combination of paragraph embedding features and topic features from LDA. This makes sense intuitively because paragraph-embedding takes into account the context in the text. Context is very important in determining the sentiment of a short text: having negative words in the text does not always mean that the text contains a negative sentiment. For example, the sentence “never say never is not a bad thing” has many negative words; but the sentence as a whole does not have a negative sentiment. This is why we need some kind of context information to accurately determine the sentiment. Thus, with these embedded features, one would be able to better determine the polarity of the sentence. The other label is the entity (candidate/song etc.) in consideration. Topic features here make sense because this can be considered as the topic of the tweet in some sense. This can be done because that label captures what or whom the tweet is about."
],
[
"In this section, we show the results for the outcome-prediction of the events. RaKel, as the best performing method, is trained to predict the sentiment-labels for the unlabeled data. The sentiment labels are then used to determine the outcome of the events. In the Tables (TABREF35, TABREF36, TABREF37) of outputs given, we only show as many predictions as there are winners."
],
[
"The results obtained for the outcome prediction task for the US presidential debates is shown in Table TABREF35. Table TABREF35 shows the winners as given in the Washington Post (3rd column) and the winners that are predicted by our system (2nd column). By comparing the set of results obtained from both the sources, we find that the set of candidates predicted match to a large extent with the winners given out by the Washington Post. The result suggests that the opinions of the social media community match with that of the journalists in most cases."
],
[
"A Grammy Award is given to outstanding achievement in the music industry. There are two types of awards: “General Field” awards, which are not restricted by genre, and genre-specific awards. Since, there can be upto 80 categories of awards, we only focus on the main 4: 1) Album of the Year, 2) Record of the Year, 3) Song of the Year, and 4) Best New Artist. These categories are the main in the awards ceremony and most looked forward to. That is also why we choose to predict the outcomes of these categories based on the tweets. We use the tweets before the ceremony (but after the nominations) to predict the outcomes.",
"Basically, we have a list of nominations for each category. We filter the tweets based on these nominations and then predict the winner as with the case of the debates. The outcomes are listed in Table TABREF36. We see that largely, the opinion of the users on the social network, agree with the deciding committee of the awards. The winners agree for all the categories except “Song of the Year”."
],
[
"The Super Bowl is the annual championship game of the National Football League. We have collected the data for the year 2013. Here, the match was between the Baltimore Ravens and the San Francisco 49ers. The tweets that we have collected are during the game. From these tweets, one could trivially determine the winner. But an interesting outcome would be to predict the Most Valuable Player (MVP) during the game. To determine this, all the tweets were looked at and we determined the candidate with the highest positive sentiment by the end of the game. The result in Table TABREF37 suggests that we are able to determine the outcomes accurately.",
"Table TABREF43 displays some evaluation metrics for this task. These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets. Since the number of participants varies from debate-to-debate or category-to-category for Grammy etc., we cannot return a fixed number of winners for everything. So, the size of our returned ranked-list is set to half of the number of participants (except for the MVP for Super Bowl; there are so many players and returning half of them when only one of them is relevant is meaningless. So, we just return the top 10 players). As we can see from the metrics, the predicted outcomes match quite well with the actual ones (or the ones given by the experts)."
],
[
"This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases, the predictions were right indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification was a combination of the document-embedding and topic features."
]
]
} | {
"question": [
"How many label options are there in the multi-label task?",
"What is the interannotator agreement of the crowd sourced users?",
"Who are the experts?",
"Who is the crowd in these experiments?",
"How do you establish the ground truth of who won a debate?"
],
"question_id": [
"5b029ad0d20b516ec11967baaf7d2006e8d7199f",
"79bd2ad4cb5c630ce69d5a859ed118132cae62d7",
"d3a1a53521f252f869fdae944db986931d9ffe48",
"38e11663b03ac585863742044fd15a0e875ae9ab",
"14421b7ae4459b647033b3ccba635d4ba7bb114b"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"Crowd Sourcing",
"Crowd Sourcing",
"Crowd Sourcing",
"Crowd Sourcing",
"Crowd Sourcing"
],
"question_writer": [
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" two labels "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about)."
],
"highlighted_evidence": [
"For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration."
]
}
],
"annotation_id": [
"f588b9e1ac46f0432c5b9cee19c87eacd98a2d60"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"f74daa43b9e6e113e7b630379a6dfd505c7654ac"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"political pundits of the Washington Post"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd."
],
"highlighted_evidence": [
"Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post."
]
},
{
"unanswerable": false,
"extractive_spans": [
"the experts in the field"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases, the predictions were right indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification was a combination of the document-embedding and topic features."
],
"highlighted_evidence": [
"This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. "
]
}
],
"annotation_id": [
"402c5856b4c4d305261752fe90adcbff80630091",
"d568aa5549ece0078ef81e70ac1d992cd03d55ef"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" peoples' sentiments expressed over social media"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd."
],
"highlighted_evidence": [
"The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd."
]
}
],
"annotation_id": [
"e64c8b2b44254873743425f4e19427e0f572bd9f"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"experts in Washington Post"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Trend Analysis: We also analyze some certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time, across debates. Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.",
"Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debates given by the experts affects the sentiment of the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidate during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates does indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace are generally prejudiced towards the candidates, which is reflected in their sentiments of the candidates on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment of Bush has gone up one day after the debate (essentially, one day after the results given by the experts are out)."
],
"highlighted_evidence": [
"Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users.",
"One can see the winners suggested by the Washington Post in Table TABREF35. "
]
}
],
"annotation_id": [
"c9727499e5e115abb47808d195813967a53a96b9"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
]
} | {
"caption": [
"TABLE I: Debates chosen, listed in chronological order. A total of 10 debates were considered out of which 7 are Republican and 3 are Democratic.",
"TABLE II: Statistics of the Data Collected: Debates",
"Fig. 1: Histograms of Tweet Frequency vs. Debates and TV Viewers vs. Debates shown side-by-side for comparison. The red bars correspond to the Republican debates and the blue bars correspond to the Democratic debates.",
"Fig. 2: Tweet Frequency vs. Days for the 5th Republican Debate (15th December 2015).",
"Fig. 4: Sentiments of the users towards the candidates across Debates.",
"Fig. 3: Tweet Frequency vs. Hours for the 5th Republican Debate (15th December 2015).",
"Fig. 5: Graphs showing how the sentiments of the users towards the candidates before and after the debates.",
"TABLE V: Sentiment Analysis for the Presidential Debates: f1 stands for 1-gram, f2 stands for 2-gram, f3 stands for (1+2)-gram, f4 stands for (1+2)-gram+MISC, f5 stands for DOC, f6 stands for DOC + TOPIC.",
"TABLE VII: Sentiment Analysis for the 2013 Superbowl",
"TABLE VI: Sentiment Analysis for the 2013 Grammy Awards",
"TABLE VIII: Outcome Prediction based on Tweet Sentiment: the superscript on the candidates indicates the predicted ordering",
"TABLE IX: Outcome Prediction for the 2013 Grammy awards.",
"TABLE XI: Metric Results for Outcome Prediction"
],
"file": [
"3-TableI-1.png",
"3-TableII-1.png",
"5-Figure1-1.png",
"5-Figure2-1.png",
"5-Figure4-1.png",
"5-Figure3-1.png",
"6-Figure5-1.png",
"7-TableV-1.png",
"7-TableVII-1.png",
"7-TableVI-1.png",
"8-TableVIII-1.png",
"8-TableIX-1.png",
"8-TableXI-1.png"
]
} |
1910.03891 | Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding | The goal of representation learning of knowledge graphs is to encode both entities and relations into a low-dimensional embedding space. Many recent works have demonstrated the benefits of knowledge graph embedding on knowledge graph completion tasks, such as relation extraction. However, we observe that: 1) existing methods only take direct relations between entities into consideration and fail to express high-order structural relationships between entities; 2) these methods only leverage the relation triples of KGs while ignoring a large number of attribute triples that encode rich semantic information. To overcome these limitations, this paper proposes a novel knowledge graph embedding method, named KANE, which is inspired by recent developments in graph convolutional networks (GCN). KANE can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional network framework. Empirical results on three datasets show that KANE significantly outperforms seven state-of-the-art methods. Further analysis verifies the efficiency of our method and the benefits brought by the attention mechanism. | {
"section_name": [
"Introduction",
"Related Work",
"Problem Formulation",
"Proposed Model",
"Proposed Model ::: Overall Architecture",
"Proposed Model ::: Attribute Embedding Layer",
"Proposed Model ::: Embedding Propagation Layer",
"Proposed Model ::: Output Layer and Training Details",
"Experiments ::: Date sets",
"Experiments ::: Experiments Setting",
"Experiments ::: Entity Classification ::: Evaluation Protocol.",
"Experiments ::: Entity Classification ::: Test Performance.",
"Experiments ::: Entity Classification ::: Efficiency Evaluation.",
"Experiments ::: Knowledge Graph Completion",
"Conclusion and Future Work"
],
"paragraphs": [
[
"In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2 have been built to represent human complex knowledge about the real-world in the machine-readable format. The facts in KGs are usually encoded in the form of triples $(\\textit {head entity}, relation, \\textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g.,$(\\textit {Donald Trump}, Born In, \\textit {New York City})$. Figure FIGREF2 shows the subgraph of knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as the $\\textit {Born}$ and $\\textit {Abstract}$ in Figure FIGREF2, and others indicates the relations between entities (the head entity and tail entity are real world entity). Hence, the relationship in KG can be divided into relations and attributes, and correspondingly two types of triples, namely relation triples and attribute triples BIBREF3. A relation triples in KGs represents relationship between entities, e.g.,$(\\textit {Donald Trump},Father of, \\textit {Ivanka Trump})$, while attribute triples denote a literal attribute value of an entity, e.g.,$(\\textit {Donald Trump},Born, \\textit {\"June 14, 1946\"})$.",
"Knowledge graphs have became important basis for many artificial intelligence applications, such as recommendation system BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, which is attracting growing interests in both academia and industry communities. A common approach to apply KGs in these artificial intelligence applications is through embedding, which provide a simple method to encode both entities and relations into a continuous low-dimensional embedding spaces. Hence, learning distributional representation of knowledge graph has attracted many research attentions in recent years. TransE BIBREF7 is a seminal work in representation learning low-dimensional vectors for both entities and relations. The basic idea behind TransE is that the embedding $\\textbf {t}$ of tail entity should be close to the head entity's embedding $\\textbf {r}$ plus the relation vector $\\textbf {t}$ if $(h, r, t)$ holds, which indicates $\\textbf {h}+\\textbf {r}\\approx \\textbf {t}$. This model provide a flexible way to improve the ability in completing the KGs, such as predicating the missing items in knowledge graph. Since then, several methods like TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16.",
"While these methods have achieved promising results in KG completion and link predication, existing knowledge graph embedding methods still have room for improvement. First, TransE and its most extensions only take direct relations between entities into consideration. We argue that the high-order structural relationship between entities also contain rich semantic relationships and incorporating these information can improve model performance. For example the fact $\\textit {Donald Trump}\\stackrel{Father of}{\\longrightarrow }\\textit {Ivanka Trump}\\stackrel{Spouse}{\\longrightarrow }\\textit {Jared Kushner} $ indicates the relationship between entity Donald Trump and entity Jared Kushner. Several path-based methods have attempted to take multiple-step relation paths into consideration for learning high-order structural information of KGs BIBREF17, BIBREF18. But note that huge number of paths posed a critical complexity challenge on these methods. In order to enable efficient path modeling, these methods have to make approximations by sampling or applying path selection algorithm. We argue that making approximations has a large impact on the final performance.",
"Second, to the best of our knowledge, most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from sparseness and incompleteness of knowledge graph. Even worse, structure information usually cannot distinguish the different meanings of relations and entities in different triples. We believe that these rich information encoded in attribute triples can help explore rich semantic information and further improve the performance of knowledge graph. For example, we can learn date of birth and abstraction from values of Born and Abstract about Donald Trump in Figure FIGREF2. There are a huge number of attribute triples in real KGs, for example the statistical results in BIBREF3 shows attribute triples are three times as many as relationship triples in English DBpedia (2016-04). Recent a few attempts try to incorporate attribute triples BIBREF11, BIBREF12. However, these are two limitations existing in these methods. One is that only a part of attribute triples are used in the existing methods, such as only entity description is used in BIBREF12. The other is some attempts try to jointly model the attribute triples and relation triples in one unified optimization problem. The loss of two kinds triples has to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the loss of two kinds triples in their models.",
"Considering limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent developments of graph convolutional networks (GCN) BIBREF19, which have the potential of achieving the goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity. Specifically, two carefully designs are equipped in KANE to correspondingly address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates a entity embedding. Through performing such recursively embedding propagation, the high-order structural information of kGs can be successfully captured in a linear time complexity; and 2) multi-head attention-based aggregation. The weight of each attribute triples can be learned through applying the neural attention mechanism BIBREF20.",
"In experiments, we evaluate our model on two KGs tasks including knowledge graph completion and entity classification. Experimental results on three datasets shows that our method can significantly outperforms state-of-arts methods.",
"The main contributions of this study are as follows:",
"1) We highlight the importance of explicitly modeling the high-order structural and attribution information of KGs to provide better knowledge graph embedding.",
"2) We proposed a new method KANE, which achieves can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.",
"3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations."
],
[
"In recent years, there are many efforts in Knowledge Graph Embeddings for KGs aiming to encode entities and relations into a continuous low-dimensional embedding spaces. Knowledge Graph Embedding provides a very simply and effective methods to apply KGs in various artificial intelligence applications. Hence, Knowledge Graph Embeddings has attracted many research attentions in recent years. The general methodology is to define a score function for the triples and finally learn the representations of entities and relations by minimizing the loss function $f_r(h,t)$, which implies some types of transformations on $\\textbf {h}$ and $\\textbf {t}$. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes the embedding $\\textbf {t}$ of tail entity should be close to the head entity's embedding $\\textbf {r}$ plus the relation vector $\\textbf {t}$ when $(h, r, t)$ holds as mentioned in section “Introduction\". Hence, TransE defines the following loss function:",
"TransE regarding the relation as a translation between head entity and tail entity is inspired by the word2vec BIBREF21, where relationships between words often correspond to translations in latent feature space. This model achieves a good trade-off between computational efficiency and accuracy in KGs with thousands of relations. but this model has flaws in dealing with one-to-many, many-to-one and many-to-many relations.",
"In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representation in different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., entity space and relation spaces, and performs translation from entity spaces to relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrix. Recent attempts can be divided into two categories: (i) those which tries to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which tries to design more complicated strategies, e.g., deep neural network models BIBREF24.",
"Except for TransE and its extensions, some efforts measure plausibility by matching latent semantics of entities and relations. The basic idea behind these models is that the plausible triples of a KG is assigned low energies. For examples, Distant Model BIBREF25 defines two different projections for head and tail entity in a specific relation, i.e., $\\textbf {M}_{r,1}$ and $\\textbf {M}_{r,2}$. It represents the vectors of head and tail entity can be transformed by these two projections. The loss function is $f_r(h,t)=||\\textbf {M}_{r,1}\\textbf {h}-\\textbf {M}_{r,2}\\textbf {t}||_{1}$.",
"Our KANE is conceptually advantageous to existing methods in that: 1) it directly factors high-order relations into the predictive model in linear time which avoids the labor intensive process of materializing paths, thus is more efficient and convenient to use; 2) it directly encodes all attribute triples in learning representation of entities which can capture rich semantic information and further improve the performance of knowledge graph embedding, and 3) KANE can directly factors high-order relations and attribute information into the predictive model in an efficient, explicit and unified manner, thus all related parameters are tailored for optimizing the embedding objective."
],
[
"In this study, wo consider two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote the relation between entities, while attribute triples describe attributes of entities. Both relation and attribute triples denotes important information about entity, we will take both of them into consideration in the task of learning representation of entities. We let $I $ denote the set of IRIs (Internationalized Resource Identifier), $B $ are the set of blank nodes, and $L $ are the set of literals (denoted by quoted strings). The relation triples and attribute triples can be formalized as follows:",
"Definition 1. Relation and Attribute Triples: A set of Relation triples $ T_{R} $ can be represented by $ T_{R} \\subset E \\times R \\times E $, where $E \\subset I \\cup B $ is set of entities, $R \\subset I$ is set of relations between entities. Similarly, $ T_{A} \\subset E \\times R \\times A $ is the set of attribute triples, where $ A \\subset I \\cup B \\cup L $ is the set of attribute values.",
"Definition 2. Knowledge Graph: A KG consists of a combination of relation triples in the form of $ (h, r, t)\\in T_{R} $, and attribute triples in form of $ (h, r, a)\\in T_{A} $. Formally, we represent a KG as $G=(E,R,A,T_{R},T_{A})$, where $E=\\lbrace h,t|(h,r,t)\\in T_{R} \\cup (h,r,a)\\in T_{A}\\rbrace $ is set of entities, $R =\\lbrace r|(h,r,t)\\in T_{R} \\cup (h,r,a)\\in T_{A}\\rbrace $ is set of relations, $A=\\lbrace a|(h,r,a)\\in T_{A}\\rbrace $, respectively.",
"The purpose of this study is try to use embedding-based model which can capture both high-order structural and attribute information of KGs that assigns a continuous representations for each element of triples in the form $ (\\textbf {h}, \\textbf {r}, \\textbf {t})$ and $ (\\textbf {h}, \\textbf {r}, \\textbf {a})$, where Boldfaced $\\textbf {h}\\in \\mathbb {R}^{k}$, $\\textbf {r}\\in \\mathbb {R}^{k}$, $\\textbf {t}\\in \\mathbb {R}^{k}$ and $\\textbf {a}\\in \\mathbb {R}^{k}$ denote the embedding vector of head entity $h$, relation $r$, tail entity $t$ and attribute $a$ respectively.",
"Next, we detail our proposed model which models both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework."
],
[
"In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embedding of entities, relations and values in KGs, the design of embedding propagation layers based on graph attention network and the loss functions for link predication and entity classification task, respectively."
],
[
"The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the whole triples of knowledge graph as input. The task of attribute embedding lays is embedding every value in attribute triples into a continuous vector space while preserving the semantic information. To capture both high-order structural information of KGs, we used an attention-based embedding propagation method. This method can recursively propagate the embeddings of entities from an entity's neighbors, and aggregate the neighbors with different weights. The final embedding of entities, relations and values are feed into two different deep neural network for two different tasks including link predication and entity classification."
],
[
"The value in attribute triples usually is sentence or a word. To encode the representation of value from its sentence or word, we need to encode the variable-length sentences to a fixed-length vector. In this study, we adopt two different encoders to model the attribute value.",
"Bag-of-Words Encoder. The representation of attribute value can be generated by a summation of all words embeddings of values. We denote the attribute value $a$ as a word sequence $a = w_{1},...,w_{n}$, where $w_{i}$ is the word at position $i$. The embedding of $\\textbf {a}$ can be defined as follows.",
"where $\\textbf {w}_{i}\\in \\mathbb {R}^{k}$ is the word embedding of $w_{i}$.",
"Bag-of-Words Encoder is a simple and intuitive method, which can capture the relative importance of words. But this method suffers in that two strings that contains the same words with different order will have the same representation.",
"LSTM Encoder. In order to overcome the limitation of Bag-of-Word encoder, we consider using LSTM networks to encoder a sequence of words in attribute value into a single vector. The final hidden state of the LSTM networks is selected as a representation of the attribute value.",
"where $f_{lstm}$ is the LSTM network."
],
[
"Next we describe the details of recursively embedding propagation method building upon the architecture of graph convolution network. Moreover, by exploiting the idea of graph attention network, out method learn to assign varying levels of importance to entity in every entity's neighborhood and can generate attentive weights of cascaded embedding propagation. In this study, embedding propagation layer consists of two mainly components: attentive embedding propagation and embedding aggregation. Here, we start by describing the attentive embedding propagation.",
"Attentive Embedding Propagation: Considering an KG $G$, the input to our layer is a set of entities, relations and attribute values embedding. We use $\\textbf {h}\\in \\mathbb {R}^{k}$ to denote the embedding of entity $h$. The neighborhood of entity $h$ can be described by $\\mathcal {N}_{h} = \\lbrace t,a|(h,r,t)\\in T_{R} \\cup (h,r,a)\\in T_{A}\\rbrace $. The purpose of attentive embedding propagation is encode $\\mathcal {N}_{h}$ and output a vector $\\vec{\\textbf {h}}$ as the new embedding of entity $h$.",
"In order to obtain sufficient expressive power, one learnable linear transformation $\\textbf {W}\\in \\mathbb {R}^{k^{^{\\prime }} \\times k}$ is adopted to transform the input embeddings into higher level feature space. In this study, we take a triple $(h,r,t)$ as example and the output a vector $\\vec{\\textbf {h}}$ can be formulated as follows:",
"where $\\pi (h,r,t)$ is attention coefficients which indicates the importance of entity's $t$ to entities $h$ .",
"In this study, the attention coefficients also control how many information being propagated from its neighborhood through the relation. To make attention coefficients easily comparable between different entities, the attention coefficient $\\pi (h,r,t)$ can be computed using a softmax function over all the triples connected with $h$. The softmax function can be formulated as follows:",
"Hereafter, we implement the attention coefficients $\\pi (h,r,t)$ through a single-layer feedforward neural network, which is formulated as follows:",
"where the leakyRelu is selected as activation function.",
"As shown in Equation DISPLAY_FORM13, the attention coefficient score is depend on the distance head entity $h$ and the tail entity $t$ plus the relation $r$, which follows the idea behind TransE that the embedding $\\textbf {t}$ of head entity should be close to the tail entity's embedding $\\textbf {r}$ plus the relation vector $\\textbf {t}$ if $(h, r, t)$ holds.",
"Embedding Aggregation. To stabilize the learning process of attention, we perform multi-head attention on final layer. Specifically, we use $m$ attention mechanism to execute the transformation of Equation DISPLAY_FORM11. A aggregators is needed to combine all embeddings of multi-head graph attention layer. In this study, we adapt two types of aggregators:",
"Concatenation Aggregator concatenates all embeddings of multi-head graph attention, followed by a nonlinear transformation:",
"where $\\mathop {\\Big |\\Big |}$ represents concatenation, $ \\pi (h,r,t)^{i}$ are normalized attention coefficient computed by the $i$-th attentive embedding propagation, and $\\textbf {W}^{i}$ denotes the linear transformation of input embedding.",
"Averaging Aggregator sums all embeddings of multi-head graph attention and the output embedding in the final is calculated applying averaging:",
"In order to encode the high-order connectivity information in KGs, we use multiple embedding propagation layers to gathering the deep information propagated from the neighbors. More formally, the embedding of entity $h$ in $l$-th layers can be defined as follows:",
"After performing $L$ embedding propagation layers, we can get the final embedding of entities, relations and attribute values, which include both high-order structural and attribute information of KGs. Next, we discuss the loss functions of KANE for two different tasks and introduce the learning and optimization detail."
],
[
"Here, we introduce the learning and optimization details for our method. Two different loss functions are carefully designed fro two different tasks of KG, which include knowledge graph completion and entity classification. Next details of these two loss functions are discussed.",
"knowledge graph completion. This task is a classical task in knowledge graph representation learning community. Specifically, two subtasks are included in knowledge graph completion: entity predication and link predication. Entity predication aims to infer the impossible head/tail entities in testing datasets when one of them is missing, while the link predication focus on complete a triple when relation is missing. In this study, we borrow the idea of translational scoring function from TransE, which the embedding $\\textbf {t}$ of tail entity should be close to the head entity's embedding $\\textbf {r}$ plus the relation vector $\\textbf {t}$ if $(h, r, t)$ holds, which indicates $d(h+r,t)= ||\\textbf {h}+\\textbf {r}- \\textbf {t}||$. Specifically, we train our model using hinge-loss function, given formally as",
"where $\\gamma >0$ is a margin hyper-parameter, $[x ]_{+}$ denotes the positive part of $x$, $T=T_{R} \\cup T_{A}$ is the set of valid triples, and $T^{\\prime }$ is set of corrupted triples which can be formulated as:",
"Entity Classification. For the task of entity classification, we simple uses a fully connected layers and binary cross-entropy loss (BCE) over sigmoid activation on the output of last layer. We minimize the binary cross-entropy on all labeled entities, given formally as:",
"where $E_{D}$ is the set of entities indicates have labels, $C$ is the dimension of the output features, which is equal to the number of classes, $y_{ej}$ is the label indicator of entity $e$ for $j$-th class, and $\\sigma (x)$ is sigmoid function $\\sigma (x) = \\frac{1}{1+e^{-x}}$.",
"We optimize these two loss functions using mini-batch stochastic gradient decent (SGD) over the possible $\\textbf {h}$, $\\textbf {r}$, $\\textbf {t}$, with the chin rule that applying to update all parameters. At each step, we update the parameter $\\textbf {h}^{\\tau +1}\\leftarrow \\textbf {h}^{\\tau }-\\lambda \\nabla _{\\textbf {h}}\\mathcal {L}$, where $\\tau $ labels the iteration step and $\\lambda $ is the learning rate."
],
[
"In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24."
],
[
"In evaluation, we compare our method with three types of models:",
"1) Typical Methods. Three typical knowledge graph embedding methods includes TransE, TransR and TransH are selected as baselines. For TransE, the dissimilarity measure is implemented with L1-norm, and relation as well as entity are replaced during negative sampling. For TransR, we directly use the source codes released in BIBREF9. In order for better performance, the replacement of relation in negative sampling is utilized according to the suggestion of author.",
"2) Path-based Methods. We compare our method with two typical path-based model include PTransE, and ALL-PATHS BIBREF18. PTransE is the first method to model relation path in KG embedding task, and ALL-PATHS improve the PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.",
"3) Attribute-incorporated Methods. Several state-of-art attribute-incorporated methods including R-GCN BIBREF24 and KR-EAR BIBREF26 are used to compare with our methods on three real datasets.",
"In addition, four variants of KANE which each of which correspondingly defines its specific way of computing the attribute value embedding and embedding aggregation are used as baseline in evaluation. In this study, we name four three variants as KANE (BOW+Concatenation), KANE (BOW+Average), and KANE (LSTM+Concatenation), KANE (LSTM+Average). Our method is learned with mini-batch SGD. As for hyper-parameters, we select batch size among {16, 32, 64, 128}, learning rate $\\lambda $ for SGD among {0.1, 0.01, 0.001}. For a fair comparison, we also set the vector dimensions of all entity and relation to the same $k \\in ${128, 258, 512, 1024}, the same dissimilarity measure $l_{1}$ or $l_{2}$ distance in loss function, and the same number of negative examples $n$ among {1, 10, 20, 40}. The training time on both data sets is limited to at most 400 epochs. The best models are selected by a grid search and early stopping on validation sets."
],
[
"In entity classification, the aim is to predicate the type of entity. For all baseline models, we first get the entity embedding in different datasets through default parameter settings as in their original papers or implementations.Then, Logistic Regression is used as classifier, which regards the entity's embeddings as feature of classifier. In evaluation, we random selected 10% of training set as validation set and accuracy as evaluation metric."
],
[
"Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets. For more in-depth performance analysis, we note: (1) Among all baselines, Path-based methods and Attribute-incorporated methods outperform three typical methods. This indicates that incorporating extra information can improve the knowledge graph embedding performance; (2) Four variants of KANE always outperform baseline methods. The main reasons why KANE works well are two fold: 1) KANE can capture high-order structural information of KGs in an efficient, explicit manner and passe these information to their neighboring; 2) KANE leverages rich information encoded in attribute triples. These rich semantic information can further improve the performance of knowledge graph; (3) The variant of KANE that use LSTM Encoder and Concatenation aggregator outperform other variants. The main reasons is that LSTM encoder can distinguish the word order and concatenation aggregator combine all embedding of multi-head attention in a higher leaver feature space, which can obtain sufficient expressive power."
],
[
"Figure FIGREF30 shows the test accuracy with increasing epoch on DBP24K and Game30K. We can see that test accuracy first rapidly increased in the first ten iterations, but reaches a stable stages when epoch is larger than 40. Figure FIGREF31 shows test accuracy with different embedding size and training data proportions. We can note that too small embedding size or training data proportions can not generate sufficient global information. In order to further analysis the embeddings learned by our method, we use t-SNE tool BIBREF27 to visualize the learned embedding. Figure FIGREF32 shows the visualization of 256 dimensional entity's embedding on Game30K learned by KANE, R-GCN, PransE and TransE. We observe that our method can learn more discriminative entity's embedding than other other methods."
],
[
"The purpose of knowledge graph completion is to complete a triple $(h, r, t)$ when one of $h, r, t$ is missing, which is used many literature BIBREF7. Two measures are considered as our evaluation metrics: (1) the mean rank of correct entities or relations (Mean Rank); (2) the proportion of correct entities or relations ranked in top1 (Hits@1, for relations) or top 10 (Hits@10, for entities). Following the setting in BIBREF7, we also adopt the two evaluation settings named \"raw\" and \"filter\" in order to avoid misleading behavior.",
"The results of entity and relation predication on FB24K are shown in the Table TABREF33. This results indicates that KANE still outperforms other baselines significantly and consistently. This also verifies the necessity of modeling high-order structural and attribute information of KGs in Knowledge graph embedding models."
],
[
"Many recent works have demonstrated the benefits of knowledge graph embedding in knowledge graph completion, such as relation extraction. However, We argue that knowledge graph embedding method still have room for improvement. First, TransE and its most extensions only take direct relations between entities into consideration. Second, most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples. In order to overcome these limitation, inspired by the recent developments of graph convolutional networks, we propose a new knowledge graph embedding methods, named KANE. The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity. Empirical results on three datasets show that KANE significantly outperforms seven state-of-arts methods."
]
]
} | {
"question": [
"How much better is performance of proposed method than state-of-the-art methods in experiments?",
"What further analysis is done?",
"What seven state-of-the-art methods are used for comparison?",
"What three datasets are used to measure performance?",
"How does KANE capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner?",
"What are recent works on knowedge graph embeddings authors mention?"
],
"question_id": [
"52f7e42fe8f27d800d1189251dfec7446f0e1d3b",
"00e6324ecd454f5d4b2a4b27fcf4104855ff8ee2",
"aa0d67c2a1bc222d1f2d9e5d51824352da5bb6dc",
"cf0085c1d7bd9bc9932424e4aba4e6812d27f727",
"586b7470be91efe246c3507b05e30651ea6b9832",
"31b20a4bab09450267dfa42884227103743e3426"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Accuracy of best proposed method KANE (LSTM+Concatenation) are 0.8011, 0.8592, 0.8605 compared to best state-of-the art method R-GCN + LR 0.7721, 0.8193, 0.8229 on three datasets respectively.",
"evidence": [
"Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets. For more in-depth performance analysis, we note: (1) Among all baselines, Path-based methods and Attribute-incorporated methods outperform three typical methods. This indicates that incorporating extra information can improve the knowledge graph embedding performance; (2) Four variants of KANE always outperform baseline methods. The main reasons why KANE works well are two fold: 1) KANE can capture high-order structural information of KGs in an efficient, explicit manner and passe these information to their neighboring; 2) KANE leverages rich information encoded in attribute triples. These rich semantic information can further improve the performance of knowledge graph; (3) The variant of KANE that use LSTM Encoder and Concatenation aggregator outperform other variants. The main reasons is that LSTM encoder can distinguish the word order and concatenation aggregator combine all embedding of multi-head attention in a higher leaver feature space, which can obtain sufficient expressive power.",
"FLOAT SELECTED: Table 2: Entity classification results in accuracy. We run all models 10 times and report mean ± standard deviation. KANE significantly outperforms baselines on FB24K, DBP24K and Game30K."
],
"highlighted_evidence": [
"Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets.",
"FLOAT SELECTED: Table 2: Entity classification results in accuracy. We run all models 10 times and report mean ± standard deviation. KANE significantly outperforms baselines on FB24K, DBP24K and Game30K."
]
}
],
"annotation_id": [
"b27e860ab0d3f3d3c9f7fe0a2f8907d38965d7a2"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"we use t-SNE tool BIBREF27 to visualize the learned embedding"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Figure FIGREF30 shows the test accuracy with increasing epoch on DBP24K and Game30K. We can see that test accuracy first rapidly increased in the first ten iterations, but reaches a stable stages when epoch is larger than 40. Figure FIGREF31 shows test accuracy with different embedding size and training data proportions. We can note that too small embedding size or training data proportions can not generate sufficient global information. In order to further analysis the embeddings learned by our method, we use t-SNE tool BIBREF27 to visualize the learned embedding. Figure FIGREF32 shows the visualization of 256 dimensional entity's embedding on Game30K learned by KANE, R-GCN, PransE and TransE. We observe that our method can learn more discriminative entity's embedding than other other methods."
],
"highlighted_evidence": [
"In order to further analysis the embeddings learned by our method, we use t-SNE tool BIBREF27 to visualize the learned embedding."
]
}
],
"annotation_id": [
"0015edbc5f0346d09d14eb8118aaf4d850f19556"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"TransE, TransR and TransH",
"PTransE, and ALL-PATHS",
"R-GCN BIBREF24 and KR-EAR BIBREF26"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"1) Typical Methods. Three typical knowledge graph embedding methods includes TransE, TransR and TransH are selected as baselines. For TransE, the dissimilarity measure is implemented with L1-norm, and relation as well as entity are replaced during negative sampling. For TransR, we directly use the source codes released in BIBREF9. In order for better performance, the replacement of relation in negative sampling is utilized according to the suggestion of author.",
"2) Path-based Methods. We compare our method with two typical path-based model include PTransE, and ALL-PATHS BIBREF18. PTransE is the first method to model relation path in KG embedding task, and ALL-PATHS improve the PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.",
"3) Attribute-incorporated Methods. Several state-of-art attribute-incorporated methods including R-GCN BIBREF24 and KR-EAR BIBREF26 are used to compare with our methods on three real datasets."
],
"highlighted_evidence": [
"1) Typical Methods. Three typical knowledge graph embedding methods includes TransE, TransR and TransH are selected as baselines.",
"2) Path-based Methods. We compare our method with two typical path-based model include PTransE, and ALL-PATHS BIBREF18.",
"3) Attribute-incorporated Methods. Several state-of-art attribute-incorporated methods including R-GCN BIBREF24 and KR-EAR BIBREF26 are used to compare with our methods on three real datasets."
]
}
],
"annotation_id": [
"60863ee85123b18acf4f57b81e292c1ce2f19fc1"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"FB24K",
"DBP24K",
"Game30K"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24."
],
"highlighted_evidence": [
"First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K."
]
},
{
"unanswerable": false,
"extractive_spans": [
"Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24."
],
"highlighted_evidence": [
"In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24."
]
}
],
"annotation_id": [
"7f3bec79e3400d3867b79b98f14c7b312b109ab7",
"c70897a9aaf396da5ce44f08ae000d6f238bfc88"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"To capture both high-order structural information of KGs, we used an attention-based embedding propagation method."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the whole triples of knowledge graph as input. The task of attribute embedding lays is embedding every value in attribute triples into a continuous vector space while preserving the semantic information. To capture both high-order structural information of KGs, we used an attention-based embedding propagation method. This method can recursively propagate the embeddings of entities from an entity's neighbors, and aggregate the neighbors with different weights. The final embedding of entities, relations and values are feed into two different deep neural network for two different tasks including link predication and entity classification."
],
"highlighted_evidence": [
"The task of attribute embedding lays is embedding every value in attribute triples into a continuous vector space while preserving the semantic information. To capture both high-order structural information of KGs, we used an attention-based embedding propagation method.",
"The final embedding of entities, relations and values are feed into two different deep neural network for two different tasks including link predication and entity classification."
]
}
],
"annotation_id": [
"c722c3ab454198d9287cddd3713f3785a8ade0ef"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"entity types or concepts BIBREF13",
"relations paths BIBREF17",
" textual descriptions BIBREF11, BIBREF12",
"logical rules BIBREF23",
"deep neural network models BIBREF24"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representation in different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., entity space and relation spaces, and performs translation from entity spaces to relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrix. Recent attempts can be divided into two categories: (i) those which tries to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which tries to design more complicated strategies, e.g., deep neural network models BIBREF24."
],
"highlighted_evidence": [
"Recent attempts can be divided into two categories: (i) those which tries to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which tries to design more complicated strategies, e.g., deep neural network models BIBREF24."
]
}
],
"annotation_id": [
"e288360a2009adb48d0b87242ef71a9e1734a82b"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Subgraph of a knowledge graph contains entities, relations and attributes.",
"Figure 2: Illustration of the KANE architecture.",
"Table 1: The statistics of datasets.",
"Table 2: Entity classification results in accuracy. We run all models 10 times and report mean ± standard deviation. KANE significantly outperforms baselines on FB24K, DBP24K and Game30K.",
"Figure 3: Test accuracy with increasing epoch.",
"Table 3: Results of knowledge graph completion (FB24K)",
"Figure 4: Test accuracy by varying parameter.",
"Figure 5: The t-SNE visualization of entity embeddings in Game30K."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Figure3-1.png",
"7-Table3-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png"
]
} |
1610.00879 | A Computational Approach to Automatic Prediction of Drunk Texting | Alcohol abuse may lead to unsociable behavior such as crime, drunk driving, or privacy leaks. We introduce automatic drunk-texting prediction as the task of identifying whether a text was written when under the influence of alcohol. We experiment with tweets labeled using hashtags as distant supervision. Our classifiers use a set of N-gram and stylistic features to detect drunk tweets. Our observations present the first quantitative evidence that text contains signals that can be exploited to detect drunk-texting. | {
"section_name": [
"Introduction",
"Motivation",
"Definition and Challenges",
"Dataset Creation",
"Feature Design",
"Evaluation",
"Performance for Datasets 1 and 2",
"Performance for Held-out Dataset H",
"Error Analysis",
"Conclusion & Future Work"
],
"paragraphs": [
[
"The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.",
"A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction."
],
[
"Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0 , crime BIBREF1 , suicide attempts BIBREF2 , drunk driving BIBREF3 , and risky sexual behaviour BIBREF4 . suicide state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.",
"Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications avoid drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis will be more helpful, especially since it offers a more natural setting by monitoring stream of social media text and not explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed to avoid regrettable drunk-texting. To the best of our knowledge, ours is the first study that does a quantitative analysis, in terms of prediction of the drunk state by using textual clues.",
"Several studies have studied linguistic traits associated with emotion expression and mental health issues, suicidal nature, criminal status, etc. BIBREF5 , BIBREF6 . NLP techniques have been used in the past to address social safety and mental health issues BIBREF7 ."
],
[
"Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:"
],
[
"We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:",
"The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.",
"Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps were a good option to account for time at which a tweet was posted. However, this could not be used because user's local times was not available, since very few users had geolocation enabled."
],
[
"The complete set of features is shown in Table TABREF7 . There are two sets of features: (a) N-gram features, and (b) Stylistic features. We use unigrams and bigrams as N-gram features- considering both presence and count.",
"Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9 . Discourse connectors are identified based on a manually created list. Spelling errors are identified using a spell checker by enchant. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of happpy. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include: count/presence of emoticons and sentiment ratio. Sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the sentiment lexicon in mpqa. To identify a more refined set of words that correspond to the two classes, we also estimated 20 topics for the dataset by estimating an LDA model BIBREF10 . We then consider top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features."
],
[
"Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class."
],
[
"Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction."
],
[
"Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in the Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 is better than which is trained on Dataset 1."
],
[
"Some categories of errors that occur are:",
"Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with`#Drunk'. However, this tweet is not likely to be a drunk tweet, but describes a drunk episode in retrospective. Our classifier predicts it as sober.",
"Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.",
"Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it."
],
[
"In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our heldout dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system reaches reasonably close and performs with a best accuracy of 64%.",
"Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application."
]
]
} | {
"question": [
"Do they report results only on English data?",
"Do the authors mention any confounds to their study?",
"What baseline model is used?",
"What stylistic features are used to detect drunk texts?",
"Is the data acquired under distant supervision verified by humans at any stage?",
"What hashtags are used for distant supervision?",
"Do the authors equate drunk tweeting with drunk texting? "
],
"question_id": [
"45306b26447ea4b120655d6bb2e3636079d3d6e0",
"0c08af6e4feaf801185f2ec97c4da04c8b767ad6",
"6412e97373e8e9ae3aa20aa17abef8326dc05450",
"957bda6b421ef7d2839c3cec083404ac77721f14",
"368317b4fd049511e00b441c2e9550ded6607c37",
"b3ec918827cd22b16212265fcdd5b3eadee654ae",
"387970ebc7ef99f302f318d047f708274c0e8f21"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Figure 1: Word cloud for drunk tweets"
],
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: Word cloud for drunk tweets"
]
}
],
"annotation_id": [
"9673c8660ce783e03520c8e10c5ec0167cb2bce2"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction."
],
"highlighted_evidence": [
"Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work."
]
}
],
"annotation_id": [
"c3aebfe695d105d331a1b20e57ea7351ff9a6a0a"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Human evaluators",
"evidence": [
"FLOAT SELECTED: Table 5: Performance of human evaluators and our classifiers (trained on all features), for Dataset-H as the test set"
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Performance of human evaluators and our classifiers (trained on all features), for Dataset-H as the test set"
]
}
],
"annotation_id": [
"f8c23d7f79a2917e681146c5ac96156f70d8052b"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalisation, Length, Emoticon (Presence/Count ) \n and Sentiment Ratio",
"evidence": [
"FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction"
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction"
]
},
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalization, Length, Emoticon (Presence/Count), Sentiment Ratio.",
"evidence": [
"FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction"
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Our Feature Set for Drunk-texting Prediction"
]
}
],
"annotation_id": [
"292a984fb6a227b6a54d3c36bde5d550a67b8329",
"7c7c413a0794b49fd5a8ec103b583532c56e4f7c"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in the Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 is better than which is trained on Dataset 1."
],
"highlighted_evidence": [
"Three annotators, A1-A3, mark each tweet in the Dataset H as drunk or sober."
]
}
],
"annotation_id": [
"1d9d381cc6f219b819bf4445168e5bc27c65ffff"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"2d5e36194e68acf93a75c8e44c93e33fe697ed42"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'."
],
"highlighted_evidence": [
"In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. "
]
}
],
"annotation_id": [
"19cbce0e0847cb0c02eed760d2bbe3d0eb3caee1"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
]
} | {
"caption": [
"Figure 1: Word cloud for drunk tweets",
"Table 1: Our Feature Set for Drunk-texting Prediction",
"Table 2: Performance of our features on Datasets 1 and 2",
"Table 4: Cohen’s Kappa for three annotators (A1A3)",
"Table 3: Top stylistic features for Datasets 1 and 2 obtained using Chi-squared test-based ranking",
"Table 5: Performance of human evaluators and our classifiers (trained on all features), for Dataset-H as the test set"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table4-1.png",
"4-Table3-1.png",
"4-Table5-1.png"
]
} |
1704.05572 | Answering Complex Questions Using Open Information Extraction | While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge. | {
"section_name": [
"Introduction",
"Related Work",
"Tuple Inference Solver",
"Tuple KB",
"Tuple Selection",
"Support Graph Search",
"Experiments",
"Results",
"Error Analysis",
"Conclusion",
"Appendix: ILP Model Details",
"Experiment Details",
"Using curated tables with TupleInf",
"Using Open IE tuples with TableILP"
],
"paragraphs": [
[
"Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .",
"Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.",
"Consider the following question from an Alaska state 4th grade science test:",
"Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon",
"This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).",
"A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.",
"To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions."
],
[
"We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB)."
],
[
"We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple."
],
[
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
[
"Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\\lbrace a_i\\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.",
"Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with question tokens $tok(qa).$ . We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question, $q$ as a query and each tuple, $t$ as a document: $\n&\\textit {tf}(x, q)=1\\; \\textmd {if x} \\in q ; \\textit {idf}(x) = log(1 + N/n_x) \\\\\n&\\textit {tf-idf}(t, q)=\\sum _{x \\in t\\cap q} idf(x)\n$ ",
" where $N$ is the number of tuples in the KB and $n_x$ are the number of tuples containing $x$ . We normalize the tf-idf score by the number of tokens in $t$ and $q$ . We finally take the 50 top-scoring tuples $T_{qa}$ .",
"On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\\prime }_{qa}$ ."
],
[
"Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \\cup T^{\\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .",
"The qterms, answer choices, and tuples fields form the set of possible vertices, $\\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\\mathcal {G}(\\mathcal {V}, \\mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, resp. We define the desired behavior of an optimal support graph via an ILP model as follows.",
"Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.",
"Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.",
"Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.",
"We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).",
"We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).",
"Its worth mentioning that TupleInf only combines parallel evidence i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
],
[
"Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .",
"We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.",
"We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\\prime }_{qa}$ . Additionally, TableILP uses $\\sim $ 70 Curated tables (C) designed for 4th grade NY Regents exams.",
"We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions."
],
[
"Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.",
"Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength."
],
[
"We describe four classes of failures that we observed, and the future work they suggest.",
"Missing Important Words: Which material will spread out to completely fill a larger container? (A)air (B)ice (C)sand (D)water",
"In this question, we have tuples that support water will spread out and fill a larger container but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid that.",
"Lossy IE: Which action is the best method to separate a mixture of salt and water? ...",
"The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.",
"Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live?(A) Oxygen(B) Carbon dioxide(C) Helium(D) Water vapor",
"TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.",
"Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...",
"Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver."
],
[
"We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures."
],
[
"To build the ILP model, we first need to get the questions terms (qterm) from the question by chunking the question using an in-house chunker based on the postagger from FACTORIE. "
],
[
"We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice $x_{a_i}$ to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows."
],
[
"For each question, we select the 7 best matching tables using the tf-idf score of the table w.r.t. the question tokens and top 20 rows from each table using the Jaccard similarity of the row with the question. (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object with the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We pick top-scoring 50 tuples using the Jaccard score."
],
[
"We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \\ldots )$ , we add a triple $(S; P; O_i)$ to this table."
]
]
} | {
"question": [
"What corpus was the source of the OpenIE extractions?",
"What is the accuracy of the proposed technique?",
"Is an entity linking process used?",
"Are the OpenIE extractions all triples?",
"What method was used to generate the OpenIE extractions?",
"Can the method answer multi-hop questions?",
"What was the textual source to which OpenIE was applied?",
"What OpenIE method was used to generate the extractions?",
"Is their method capable of multi-hop reasoning?"
],
"question_id": [
"2fffff59e57b8dbcaefb437a6b3434fc137f813b",
"eb95af36347ed0e0808e19963fe4d058e2ce3c9f",
"cd1792929b9fa5dd5b1df0ae06fc6aece4c97424",
"65d34041ffa4564385361979a08706b10b92ebc7",
"e215fa142102f7f9eeda9c9eb8d2aeff7f2a33ed",
"a8545f145d5ea2202cb321c8f93e75ad26fcf4aa",
"417dabd43d6266044d38ed88dbcb5fdd7a426b22",
"fed230cef7c130f6040fb04304a33bbc17ca3a36",
"7917d44e952b58ea066dc0b485d605c9a1fe3dda"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"no",
"no",
"no"
],
"search_query": [
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"semi-structured",
"information extraction",
"information extraction",
"information extraction"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T).",
"We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\\prime }_{qa}$ . Additionally, TableILP uses $\\sim $ 70 Curated tables (C) designed for 4th grade NY Regents exams."
],
"highlighted_evidence": [
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. ",
"Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T).",
"The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. "
]
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"3dc26c840c9d93a07e7cfd50dae2ec9e454e39e4",
"b66d581a485f807a457f36777a1ab22dbf849998"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "51.7 and 51.6 on 4th and 8th grade question sets with no curated knowledge. 47.5 and 48.0 on 4th and 8th grade question sets when both solvers are given the same knowledge",
"evidence": [
"FLOAT SELECTED: Table 2: TUPLEINF is significantly better at structured reasoning than TABLEILP.9"
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: TUPLEINF is significantly better at structured reasoning than TABLEILP.9"
]
}
],
"annotation_id": [
"a6c4425bc88c8d30a2aa9a7a2a791025314fadef"
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\\lbrace a_i\\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.",
"Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with question tokens $tok(qa).$ . We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question, $q$ as a query and each tuple, $t$ as a document: $ &\\textit {tf}(x, q)=1\\; \\textmd {if x} \\in q ; \\textit {idf}(x) = log(1 + N/n_x) \\\\ &\\textit {tf-idf}(t, q)=\\sum _{x \\in t\\cap q} idf(x) $"
],
"highlighted_evidence": [
"Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\\lbrace a_i\\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.",
"Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with question tokens $tok(qa).$ ."
]
}
],
"annotation_id": [
"ab7b691d0d2b23ca9201a02c67ca98202f0e2067"
],
"worker_id": [
"f840a836eee0180d2c976457f8b3052d8e78050c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \\ldots )$ , we add a triple $(S; P; O_i)$ to this table."
],
"highlighted_evidence": [
"For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \\ldots )$ , we add a triple $(S; P; O_i)$ to this table."
]
}
],
"annotation_id": [
"34d51905bd8bea5030d4b5e1095cac2ab2266afe"
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S",
"take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
"highlighted_evidence": [
"For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
]
}
],
"annotation_id": [
"6413e76a47bda832eb45a35af9100d6ae8db32cc"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Its worth mentioning that TupleInf only combines parallel evidence i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
],
"highlighted_evidence": [
"For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
]
}
],
"annotation_id": [
"14ed4878b0c2a3d3def83d2973038ed102fbdd63"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\\prime }_{qa}$ . Additionally, TableILP uses $\\sim $ 70 Curated tables (C) designed for 4th grade NY Regents exams.",
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
"highlighted_evidence": [
"The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining.",
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. ",
"We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
]
}
],
"annotation_id": [
"fc0aef9fb401b68ee551d7e92fde4f03903c31d9"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S",
"take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
],
"highlighted_evidence": [
"Specifically, for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$ to create the tuple KB (T)."
]
}
],
"annotation_id": [
"f3306bb0b0a58fcbca6b4227c4126e8923213e0f"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Its worth mentioning that TupleInf only combines parallel evidence i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
],
"highlighted_evidence": [
"For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work."
]
}
],
"annotation_id": [
"0014dfeeb1ed23852c5301f81e02d1710a9c8c78"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
]
} | {
"caption": [
"Figure 1: An example support graph linking a question (top), two tuples from the KB (colored) and an answer option (nitrogen).",
"Table 2: TUPLEINF is significantly better at structured reasoning than TABLEILP.9",
"Table 1: High-level ILP constraints; we report results for ~w = (2, 4, 4, 4, 2); the model can be improved with more careful parameter selection",
"Table 3: TUPLEINF is complementarity to IR, resulting in a strong ensemble"
],
"file": [
"3-Figure1-1.png",
"4-Table2-1.png",
"4-Table1-1.png",
"5-Table3-1.png"
]
} |
1804.10686 | An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages | In this paper, we present Watasense, an unsupervised system for word sense disambiguation. Given a sentence, the system chooses the most relevant sense of each input word with respect to the semantic similarity between the given sentence and the synset constituting the sense of the target word. Watasense has two modes of operation. The sparse mode uses the traditional vector space model to estimate the most similar word sense corresponding to its context. The dense mode, instead, uses synset embeddings to cope with the sparsity problem. We describe the architecture of the present system and also conduct its evaluation on three different lexical semantic resources for Russian. We found that the dense mode substantially outperforms the sparse one on all datasets according to the adjusted Rand index. | {
"section_name": [
"Introduction",
"Related Work",
"Watasense, an Unsupervised System for Word Sense Disambiguation",
"System Architecture",
"User Interface",
"Word Sense Disambiguation",
"Evaluation",
"Quality Measure",
"Dataset",
"Results",
"Conclusion",
"Acknowledgements"
],
"paragraphs": [
[
"Word sense disambiguation (WSD) is a natural language processing task of identifying the particular word senses of polysemous words used in a sentence. Recently, a lot of attention was paid to the problem of WSD for the Russian language BIBREF0 , BIBREF1 , BIBREF2 . This problem is especially difficult because of both linguistic issues – namely, the rich morphology of Russian and other Slavic languages in general – and technical challenges like the lack of software and language resources required for addressing the problem.",
"To address these issues, we present Watasense, an unsupervised system for word sense disambiguation. We describe its architecture and conduct an evaluation on three datasets for Russian. The choice of an unsupervised system is motivated by the absence of resources that would enable a supervised system for under-resourced languages. Watasense is not strictly tied to the Russian language and can be applied to any language for which a tokenizer, part-of-speech tagger, lemmatizer, and a sense inventory are available.",
"The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents the Watasense word sense disambiguation system, presents its architecture, and describes the unsupervised word sense disambiguation methods bundled with it. Section 4 evaluates the system on a gold standard for Russian. Section 5 concludes with final remarks."
],
[
"Although the problem of WSD has been addressed in many SemEval campaigns BIBREF3 , BIBREF4 , BIBREF5 , we focus here on word sense disambiguation systems rather than on the research methodologies.",
"Among the freely available systems, IMS (“It Makes Sense”) is a supervised WSD system designed initially for the English language BIBREF6 . The system uses a support vector machine classifier to infer the particular sense of a word in the sentence given its contextual sentence-level features. Pywsd is an implementation of several popular WSD algorithms implemented in a library for the Python programming language. It offers both the classical Lesk algorithm for WSD and path-based algorithms that heavily use the WordNet and similar lexical ontologies. DKPro WSD BIBREF7 is a general-purpose framework for WSD that uses a lexical ontology as the sense inventory and offers the variety of WordNet-based algorithms. Babelfy BIBREF8 is a WSD system that uses BabelNet, a large-scale multilingual lexical ontology available for most natural languages. Due to the broad coverage of BabelNet, Babelfy offers entity linking as part of the WSD functionality.",
"Panchenko:17:emnlp present an unsupervised WSD system that is also knowledge-free: its sense inventory is induced based on the JoBimText framework, and disambiguation is performed by computing the semantic similarity between the context and the candidate senses BIBREF9 . Pelevina:16 proposed a similar approach to WSD, but based on dense vector representations (word embeddings), called SenseGram. Similarly to SenseGram, our WSD system is based on averaging of word embeddings on the basis of an automatically induced sense inventory. A crucial difference, however, is that we induce our sense inventory from synonymy dictionaries and not distributional word vectors. While this requires more manually created resources, a potential advantage of our approach is that the resulting inventory contains less noise."
],
[
"Watasense is implemented in the Python programming language using the scikit-learn BIBREF10 and Gensim BIBREF11 libraries. Watasense offers a Web interface (Figure FIGREF2 ), a command-line tool, and an application programming interface (API) for deployment within other applications."
],
[
"A sentence is represented as a list of spans. A span is a quadruple: INLINEFORM0 , where INLINEFORM1 is the word or the token, INLINEFORM2 is the part of speech tag, INLINEFORM3 is the lemma, INLINEFORM4 is the position of the word in the sentence. These data are provided by tokenizer, part-of-speech tagger, and lemmatizer that are specific for the given language. The WSD results are represented as a map of spans to the corresponding word sense identifiers.",
"The sense inventory is a list of synsets. A synset is represented by three bag of words: the synonyms, the hypernyms, and the union of two former – the bag. Due to the performance reasons, on initialization, an inverted index is constructed to map a word to the set of synsets it is included into.",
"Each word sense disambiguation method extends the BaseWSD class. This class provides the end user with a generic interface for WSD and also encapsulates common routines for data pre-processing. The inherited classes like SparseWSD and DenseWSD should implement the disambiguate_word(...) method that disambiguates the given word in the given sentence. Both classes use the bag representation of synsets on the initialization. As the result, for WSD, not just the synonyms are used, but also the hypernyms corresponding to the synsets. The UML class diagram is presented in Figure FIGREF4 .",
"Watasense supports two sources of word vectors: it can either read the word vector dataset in the binary Word2Vec format or use Word2Vec-Pyro4, a general-purpose word vector server. The use of a remote word vector server is recommended due to the reduction of memory footprint per each Watasense process."
],
[
" FIGREF2 shows the Web interface of Watasense. It is composed of two primary activities. The first is the text input and the method selection ( FIGREF2 ). The second is the display of the disambiguation results with part of speech highlighting ( FIGREF7 ). Those words with resolved polysemy are underlined; the tooltips with the details are raised on hover."
],
[
"We use two different unsupervised approaches for word sense disambiguation. The first, called `sparse model', uses a straightforward sparse vector space model, as widely used in Information Retrieval, to represent contexts and synsets. The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.",
"In the vector space model approach, we follow the sparse context-based disambiguated method BIBREF12 , BIBREF13 . For estimating the sense of the word INLINEFORM0 in a sentence, we search for such a synset INLINEFORM1 that maximizes the cosine similarity to the sentence vector: DISPLAYFORM0 ",
"where INLINEFORM0 is the set of words forming the synset, INLINEFORM1 is the set of words forming the sentence. On initialization, the synsets represented in the sense inventory are transformed into the INLINEFORM2 -weighted word-synset sparse matrix efficiently represented in the memory using the compressed sparse row format. Given a sentence, a similar transformation is done to obtain the sparse vector representation of the sentence in the same space as the word-synset matrix. Then, for each word to disambiguate, we retrieve the synset containing this word that maximizes the cosine similarity between the sparse sentence vector and the sparse synset vector. Let INLINEFORM3 be the maximal number of synsets containing a word and INLINEFORM4 be the maximal size of a synset. Therefore, disambiguation of the whole sentence INLINEFORM5 requires INLINEFORM6 operations using the efficient sparse matrix representation.",
"In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms. We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word: DISPLAYFORM0 ",
"where INLINEFORM0 denotes the word embedding of INLINEFORM1 . We do the same transformation for the sentence vectors. Then, given a word INLINEFORM2 , a sentence INLINEFORM3 , we find the synset INLINEFORM4 that maximizes the cosine similarity to the sentence: DISPLAYFORM0 ",
"On initialization, we pre-compute the dense synset vectors by averaging the corresponding word embeddings. Given a sentence, we similarly compute the dense sentence vector by averaging the vectors of the words belonging to non-auxiliary parts of speech, i.e., nouns, adjectives, adverbs, verbs, etc. Then, given a word to disambiguate, we retrieve the synset that maximizes the cosine similarity between the dense sentence vector and the dense synset vector. Thus, given the number of dimensions INLINEFORM0 , disambiguation of the whole sentence INLINEFORM1 requires INLINEFORM2 operations."
],
[
"We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation BIBREF5 . In the gold standard, each word is provided with a set of instances, i.e., the sentences containing the word. Each instance is manually annotated with the single sense identifier according to a pre-defined sense inventory. Each participating system estimates the sense labels for these ambiguous words, which can be viewed as a clustering of instances, according to sense labels. The system's clustering is compared to the gold-standard clustering for evaluation."
],
[
"The original SemEval 2010 Task 14 used the V-Measure external clustering measure BIBREF5 . However, this measure is maximized by clustering each sentence into his own distinct cluster, i.e., a `dummy' singleton baseline. This is achieved by the system deciding that every ambiguous word in every sentence corresponds to a different word sense. To cope with this issue, we follow a similar study BIBREF1 and use instead of the adjusted Rand index (ARI) proposed by Hubert:85 as an evaluation measure.",
"In order to provide the overall value of ARI, we follow the addition approach used in BIBREF1 . Since the quality measure is computed for each lemma individually, the total value is a weighted sum, namely DISPLAYFORM0 ",
"where INLINEFORM0 is the lemma, INLINEFORM1 is the set of the instances for the lemma INLINEFORM2 , INLINEFORM3 is the adjusted Rand index computed for the lemma INLINEFORM4 . Thus, the contribution of each lemma to the total score is proportional to the number of instances of this lemma."
],
[
"We evaluate the word sense disambiguation methods in Watasense against three baselines: an unsupervised approach for learning multi-prototype word embeddings called AdaGram BIBREF15 , same sense for all the instances per lemma (One), and one sense per instance (Singletons). The AdaGram model is trained on the combination of RuWac, Lib.Ru, and the Russian Wikipedia with the overall vocabulary size of 2 billion tokens BIBREF1 .",
"As the gold-standard dataset, we use the WSD training dataset for Russian created during RUSSE'2018: A Shared Task on Word Sense Induction and Disambiguation for the Russian Language BIBREF16 . The dataset has 31 words covered by INLINEFORM0 instances in the bts-rnc subset and 5 words covered by 439 instances in the wiki-wiki subset.",
"The following different sense inventories have been used during the evaluation:",
"[leftmargin=4mm]",
"Watlink, a word sense network constructed automatically. It uses the synsets induced in an unsupervised way by the Watset[CWnolog, MCL] method BIBREF2 and the semantic relations from such dictionaries as Wiktionary referred as Joint INLINEFORM0 Exp INLINEFORM1 SWN in Ustalov:17:dialogue. This is the only automatically built inventory we use in the evaluation.",
"RuThes, a large-scale lexical ontology for Russian created by a group of expert lexicographers BIBREF17 .",
"RuWordNet, a semi-automatic conversion of the RuThes lexical ontology into a WordNet-like structure BIBREF18 .",
"Since the Dense model requires word embeddings, we used the 500-dimensional word vectors from the Russian Distributional Thesaurus BIBREF19 . These vectors are obtained using the Skip-gram approach trained on the lib.rus.ec text corpus."
],
[
"We compare the evaluation results obtained for the Sparse and Dense approaches with three baselines: the AdaGram model (AdaGram), the same sense for all the instances per lemma (One) and one sense per instance (Singletons). The evaluation results are presented in Table TABREF25 . The columns bts-rnc and wiki-wiki represent the overall value of ARI according to Equation ( EQREF15 ). The column Avg. consists of the weighted average of the datasets w.r.t. the number of instances.",
"We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. Thus, we recommend using the dense approach in further studies. Although the AdaGram approach trained on a large text corpus showed better results according to the weighted average, this result does not transfer to languages with less available corpus size."
],
[
"In this paper, we presented Watasense, an open source unsupervised word sense disambiguation system that is parameterized only by a word sense inventory. It supports both sparse and dense sense representations. We were able to show that the dense approach substantially boosts the performance of the sparse approach on three different sense inventories for Russian. We recommend using the dense approach in further studies due to its smoothing capabilities that reduce sparseness. In further studies, we will look at the problem of phrase neighbors that influence the sentence vector representations.",
"Finally, we would like to emphasize the fact that Watasense has a simple API for integrating different algorithms for WSD. At the same time, it requires only a basic set of language processing tools to be available: tokenizer, a part-of-speech tagger, lemmatizer, and a sense inventory, which means that low-resourced language can benefit of its usage."
],
[
"We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) under the project “Joining Ontologies and Semantics Induced from Text” (JOIN-T), the RFBR under the projects no. 16-37-00203 mol_a and no. 16-37-00354 mol_a, and the RFH under the project no. 16-04-12019. The research was supported by the Ministry of Education and Science of the Russian Federation Agreement no. 02.A03.21.0006. The calculations were carried out using the supercomputer “Uran” at the Krasovskii Institute of Mathematics and Mechanics."
]
]
} | {
"question": [
"Do the authors offer any hypothesis about why the dense mode outperformed the sparse one?",
"What evaluation is conducted?",
"Which corpus of synsets are used?",
"What measure of semantic similarity is used?"
],
"question_id": [
"7d5ba230522df1890619dedcfb310160958223c1",
"a48cc6d3d322a7b159ff40ec162a541bf74321eb",
"2bc0bb7d3688fdd2267c582ca593e2ce72718a91",
"8c073b7ea8cb5cc54d7fecb8f4bf88c1fb621b19"
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"topic_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We use two different unsupervised approaches for word sense disambiguation. The first, called `sparse model', uses a straightforward sparse vector space model, as widely used in Information Retrieval, to represent contexts and synsets. The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.",
"In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms. We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word: DISPLAYFORM0",
"We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. Thus, we recommend using the dense approach in further studies. Although the AdaGram approach trained on a large text corpus showed better results according to the weighted average, this result does not transfer to languages with less available corpus size."
],
"highlighted_evidence": [
"We use two different unsupervised approaches for word sense disambiguation. ",
"The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.",
"In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms. ",
"We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. "
]
}
],
"annotation_id": [
"824cf5ac42e96c2c59833eafb3da5ffe311a9996"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Word Sense Induction & Disambiguation"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation BIBREF5 . In the gold standard, each word is provided with a set of instances, i.e., the sentences containing the word. Each instance is manually annotated with the single sense identifier according to a pre-defined sense inventory. Each participating system estimates the sense labels for these ambiguous words, which can be viewed as a clustering of instances, according to sense labels. The system's clustering is compared to the gold-standard clustering for evaluation."
],
"highlighted_evidence": [
"We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation BIBREF5 . "
]
}
],
"annotation_id": [
"56bd23dd4ad8b609544076a6a0cd1e2e0a13a971"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Wiktionary"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The following different sense inventories have been used during the evaluation:",
"Watlink, a word sense network constructed automatically. It uses the synsets induced in an unsupervised way by the Watset[CWnolog, MCL] method BIBREF2 and the semantic relations from such dictionaries as Wiktionary referred as Joint INLINEFORM0 Exp INLINEFORM1 SWN in Ustalov:17:dialogue. This is the only automatically built inventory we use in the evaluation."
],
"highlighted_evidence": [
"The following different sense inventories have been used during the evaluation:",
"Watlink, a word sense network constructed automatically. It uses the synsets induced in an unsupervised way by the Watset[CWnolog, MCL] method BIBREF2 and the semantic relations from such dictionaries as Wiktionary referred as Joint INLINEFORM0 Exp INLINEFORM1 SWN in Ustalov:17:dialogue."
]
}
],
"annotation_id": [
"a7a5673c63c143f2860570a169528838b3bc463b"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"cosine similarity"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In the vector space model approach, we follow the sparse context-based disambiguated method BIBREF12 , BIBREF13 . For estimating the sense of the word INLINEFORM0 in a sentence, we search for such a synset INLINEFORM1 that maximizes the cosine similarity to the sentence vector: DISPLAYFORM0"
],
"highlighted_evidence": [
"For estimating the sense of the word INLINEFORM0 in a sentence, we search for such a synset INLINEFORM1 that maximizes the cosine similarity to the sentence vector: DISPLAYFORM0"
]
}
],
"annotation_id": [
"00053db53c993f326c28c603488445aa21bac603"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
]
} | {
"caption": [
"Figure 1: A snapshot of the online demo, which is available at http://watasense.nlpub.org/ (in Russian).",
"Figure 2: The UML class diagram of Watasense.",
"Figure 3: The word sense disambiguation results with the word “experiments” selected. The tooltip shows its lemma “experiment”, the synset identifier (36055), and the words forming the synset “experiment”, “experimenting” as well as its hypernyms “attempt”, “reproduction”, “research”, “method”.",
"Table 1: Results on RUSSE’2018 (Adjusted Rand Index)."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"4-Table1-1.png"
]
} |
1707.03904 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. We also describe a retrieval system for extracting relevant sentences and documents from the corpus given a query, and include these in the release for researchers wishing to only focus on (2). We evaluate several baselines on both datasets, ranging from simple heuristics to powerful neural models, and show that these lag behind human performance by 16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at https://github.com/bdhingra/quasar . | {
"section_name": [
"Introduction",
"Dataset Construction",
"Question sets",
"Context Retrieval",
"Candidate solutions",
"Postprocessing",
"Metrics",
"Human Evaluation",
"Baseline Systems",
"Results",
"Conclusion",
"Acknowledgments",
"Quasar-S Relation Definitions",
"Performance Analysis"
],
"paragraphs": [
[
"Factoid Question Answering (QA) aims to extract answers, from an underlying knowledge source, to information seeking questions posed in natural language. Depending on the knowledge source available there are two main approaches for factoid QA. Structured sources, including Knowledge Bases (KBs) such as Freebase BIBREF1 , are easier to process automatically since the information is organized according to a fixed schema. In this case the question is parsed into a logical form in order to query against the KB. However, even the largest KBs are often incomplete BIBREF2 , BIBREF3 , and hence can only answer a limited subset of all possible factoid questions.",
"For this reason the focus is now shifting towards unstructured sources, such as Wikipedia articles, which hold a vast quantity of information in textual form and, in principle, can be used to answer a much larger collection of questions. Extracting the correct answer from unstructured text is, however, challenging, and typical QA pipelines consist of the following two components: (1) searching for the passages relevant to the given question, and (2) reading the retrieved text in order to select a span of text which best answers the question BIBREF4 , BIBREF5 .",
"Like most other language technologies, the current research focus for both these steps is firmly on machine learning based approaches for which performance improves with the amount of data available. Machine reading performance, in particular, has been significantly boosted in the last few years with the introduction of large-scale reading comprehension datasets such as CNN / DailyMail BIBREF6 and Squad BIBREF7 . State-of-the-art systems for these datasets BIBREF8 , BIBREF9 focus solely on step (2) above, in effect assuming the relevant passage of text is already known.",
"In this paper, we introduce two new datasets for QUestion Answering by Search And Reading – Quasar. The datasets each consist of factoid question-answer pairs and a corresponding large background corpus to facilitate research into the combined problem of retrieval and comprehension. Quasar-S consists of 37,362 cloze-style questions constructed from definitions of software entities available on the popular website Stack Overflow. The answer to each question is restricted to be another software entity, from an output vocabulary of 4874 entities. Quasar-T consists of 43,013 trivia questions collected from various internet sources by a trivia enthusiast. The answers to these questions are free-form spans of text, though most are noun phrases.",
"While production quality QA systems may have access to the entire world wide web as a knowledge source, for Quasar we restrict our search to specific background corpora. This is necessary to avoid uninteresting solutions which directly extract answers from the sources from which the questions were constructed. For Quasar-S we construct the knowledge source by collecting top 50 threads tagged with each entity in the dataset on the Stack Overflow website. For Quasar-T we use ClueWeb09 BIBREF0 , which contains about 1 billion web pages collected between January and February 2009. Figure 1 shows some examples.",
"Unlike existing reading comprehension tasks, the Quasar tasks go beyond the ability to only understand a given passage, and require the ability to answer questions given large corpora. Prior datasets (such as those used in BIBREF4 ) are constructed by first selecting a passage and then constructing questions about that passage. This design (intentionally) ignores some of the subproblems required to answer open-domain questions from corpora, namely searching for passages that may contain candidate answers, and aggregating information/resolving conflicts between candidates from many passages. The purpose of Quasar is to allow research into these subproblems, and in particular whether the search step can benefit from integration and joint training with downstream reading systems.",
"Additionally, Quasar-S has the interesting feature of being a closed-domain dataset about computer programming, and successful approaches to it must develop domain-expertise and a deep understanding of the background corpus. To our knowledge it is one of the largest closed-domain QA datasets available. Quasar-T, on the other hand, consists of open-domain questions based on trivia, which refers to “bits of information, often of little importance\". Unlike previous open-domain systems which rely heavily on the redundancy of information on the web to correctly answer questions, we hypothesize that Quasar-T requires a deeper reading of documents to answer correctly.",
"We evaluate Quasar against human testers, as well as several baselines ranging from naïve heuristics to state-of-the-art machine readers. The best performing baselines achieve $33.6\\%$ and $28.5\\%$ on Quasar-S and Quasar-T, while human performance is $50\\%$ and $60.6\\%$ respectively. For the automatic systems, we see an interesting tension between searching and reading accuracies – retrieving more documents in the search phase leads to a higher coverage of answers, but makes the comprehension task more difficult. We also collect annotations on a subset of the development set questions to allow researchers to analyze the categories in which their system performs well or falls short. We plan to release these annotations along with the datasets, and our retrieved documents for each question."
],
[
"Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution. In this section, we describe how each of these fields was generated for each Quasar variant."
],
[
"The software question set was built from the definitional “excerpt” entry for each tag (entity) on StackOverflow. For example the excerpt for the “java“ tag is, “Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM).” Not every excerpt includes the tag being defined (which we will call the “head tag”), so we prepend the head tag to the front of the string to guarantee relevant results later on in the pipeline. We then completed preprocessing of the software questions by downcasing and tokenizing the string using a custom tokenizer compatible with special characters in software terms (e.g. “.net”, “c++”). Each preprocessed excerpt was then converted to a series of cloze questions using a simple heuristic: first searching the string for mentions of other entities, then repleacing each mention in turn with a placeholder string (Figure 2 ).",
"This heuristic is noisy, since the software domain often overloads existing English words (e.g. “can” may refer to a Controller Area Network bus; “swap” may refer to the temporary storage of inactive pages of memory on disk; “using” may refer to a namespacing keyword). To improve precision we scored each cloze based on the relative incidence of the term in an English corpus versus in our StackOverflow one, and discarded all clozes scoring below a threshold. This means our dataset does not include any cloze questions for terms which are common in English (such as “can” “swap” and “using”, but also “image” “service” and “packet”). A more sophisticated entity recognition system could make recall improvements here.",
"The trivia question set was built from a collection of just under 54,000 trivia questions collected by Reddit user 007craft and released in December 2015. The raw dataset was noisy, having been scraped from multiple sources with variable attention to detail in formatting, spelling, and accuracy. We filtered the raw questions to remove unparseable entries as well as any True/False or multiple choice questions, for a total of 52,000 free-response style questions remaining. The questions range in difficulty, from straightforward (“Who recorded the song `Rocket Man”' “Elton John”) to difficult (“What was Robin Williams paid for Disney's Aladdin in 1982” “Scale $485 day + Picasso Painting”) to debatable (“According to Earth Medicine what's the birth totem for march” “The Falcon”)"
],
[
"The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.",
"Context documents for each query were generated in a two-phase fashion, first collecting a large pool of semirelevant text, then filling a temporary index with short or long pseudodocuments from the pool, and finally selecting a set of $N$ top-ranking pseudodocuments (100 short or 20 long) from the temporary index.",
"For Quasar-S, the pool of text for each question was composed of 50+ question-and-answer threads scraped from http://stackoverflow.com. StackOverflow keeps a running tally of the top-voted questions for each tag in their knowledge base; we used Scrapy to pull the top 50 question posts for each tag, along with any answer-post responses and metadata (tags, authorship, comments). From each thread we pulled all text not marked as code, and split it into sentences using the Stanford NLP sentence segmenter, truncating sentences to 2048 characters. Each sentence was marked with a thread identifier, a post identifier, and the tags for the thread. Long pseudodocuments were either the full post (in the case of question posts), or the full post and its head question (in the case of answer posts), comments included. Short pseudodocuments were individual sentences.",
"To build the context documents for Quasar-S, the pseudodocuments for the entire corpus were loaded into a disk-based lucene index, each annotated with its thread ID and the tags for the thread. This index was queried for each cloze using the following lucene syntax:",
"[noitemsep] ",
"SHOULD(PHRASE(question text))",
"SHOULD(BOOLEAN(question text))",
"MUST(tags:$headtag)",
"where “question text” refers to the sequence of tokens in the cloze question, with the placeholder removed. The first SHOULD term indicates that an exact phrase match to the question text should score highly. The second SHOULD term indicates that any partial match to tokens in the question text should also score highly, roughly in proportion to the number of terms matched. The MUST term indicates that only pseudodocuments annotated with the head tag of the cloze should be considered.",
"The top $100N$ pseudodocuments were retrieved, and the top $N$ unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions showing zero results for this query were discarded.",
"For Quasar-T, the pool of text for each question was composed of 100 HTML documents retrieved from ClueWeb09. Each question-answer pair was converted to a #combine query in the Indri query language to comply with the ClueWeb09 batch query service, using simple regular expression substitution rules to remove (s/[.(){}<>:*`_]+//g) or replace (s/[,?']+/ /g) illegal characters. Any questions generating syntax errors after this step were discarded. We then extracted the plaintext from each HTML document using Jericho. For long pseudodocuments we used the full page text, truncated to 2048 characters. For short pseudodocuments we used individual sentences as extracted by the Stanford NLP sentence segmenter, truncated to 200 characters.",
"To build the context documents for the trivia set, the pseudodocuments from the pool were collected into an in-memory lucene index and queried using the question text only (the answer text was not included for this step). The structure of the query was identical to the query for Quasar-S, without the head tag filter:",
"[noitemsep] ",
"SHOULD(PHRASE(question text))",
"SHOULD(BOOLEAN(question text))",
"The top $100N$ pseudodocuments were retrieved, and the top $N$ unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions showing zero results for this query were discarded."
],
[
"The list of candidate solutions provided with each record is guaranteed to contain the correct answer to the question. Quasar-S used a closed vocabulary of 4874 tags as its candidate list. Since the questions in Quasar-T are in free-response format, we constructed a separate list of candidate solutions for each question. Since most of the correct answers were noun phrases, we took each sequence of NN* -tagged tokens in the context document, as identified by the Stanford NLP Maxent POS tagger, as the candidate list for each record. If this list did not include the correct answer, it was added to the list."
],
[
"Once context documents had been built, we extracted the subset of questions where the answer string, excluded from the query for the two-phase search, was nonetheless present in the context document. This subset allows us to evaluate the performance of the reading system independently from the search system, while the full set allows us to evaluate the performance of Quasar as a whole. We also split the full set into training, validation and test sets. The final size of each data subset after all discards is listed in Table 1 ."
],
[
"Evaluation is straightforward on Quasar-S since each answer comes from a fixed output vocabulary of entities, and we report the average accuracy of predictions as the evaluation metric. For Quasar-T, the answers may be free form spans of text, and the same answer may be expressed in different terms, which makes evaluation difficult. Here we pick the two metrics from BIBREF7 , BIBREF19 . In preprocessing the answer we remove punctuation, white-space and definite and indefinite articles from the strings. Then, exact match measures whether the two strings, after preprocessing, are equal or not. For F1 match we first construct a bag of tokens for each string, followed be preprocessing of each token, and measure the F1 score of the overlap between the two bags of tokens. These metrics are far from perfect for Quasar-T; for example, our human testers were penalized for entering “0” as answer instead of “zero”. However, a comparison between systems may still be meaningful."
],
[
"To put the difficulty of the introduced datasets into perspective, we evaluated human performance on answering the questions. For each dataset, we recruited one domain expert (a developer with several years of programming experience for Quasar-S, and an avid trivia enthusiast for Quasar-T) and $1-3$ non-experts. Each volunteer was presented with randomly selected questions from the development set and asked to answer them via an online app. The experts were evaluated in a “closed-book” setting, i.e. they did not have access to any external resources. The non-experts were evaluated in an “open-book” setting, where they had access to a search engine over the short pseudo-documents extracted for each dataset (as described in Section \"Context Retrieval\" ). We decided to use short pseudo-documents for this exercise to reduce the burden of reading on the volunteers, though we note that the long pseudo-documents have greater coverage of answers.",
"We also asked the volunteers to provide annotations to categorize the type of each question they were asked, and a label for whether the question was ambiguous. For Quasar-S the annotators were asked to mark the relation between the head entity (from whose definition the cloze was constructed) and the answer entity. For Quasar-T the annotators were asked to mark the genre of the question (e.g., Arts & Literature) and the entity type of the answer (e.g., Person). When multiple annotators marked the same question differently, we took the majority vote when possible and discarded ties. In total we collected 226 relation annotations for 136 questions in Quasar-S, out of which 27 were discarded due to conflicting ties, leaving a total of 109 annotated questions. For Quasar-T we collected annotations for a total of 144 questions, out of which 12 we marked as ambiguous. In the remaining 132, a total of 214 genres were annotated (a question could be annotated with multiple genres), while 10 questions had conflicting entity-type annotations which we discarded, leaving 122 total entity-type annotations. Figure 3 shows the distribution of these annotations."
],
[
"We evaluate several baselines on Quasar, ranging from simple heuristics to deep neural networks. Some predict a single token / entity as the answer, while others predict a span of tokens.",
"MF-i (Maximum Frequency) counts the number of occurrences of each candidate answer in the retrieved context and returns the one with maximum frequency. MF-e is the same as MF-i except it excludes the candidates present in the query. WD (Word Distance) measures the sum of distances from a candidate to other non-stopword tokens in the passage which are also present in the query. For the cloze-style Quasar-S the distances are measured by first aligning the query placeholder to the candidate in the passage, and then measuring the offsets between other tokens in the query and their mentions in the passage. The maximum distance for any token is capped at a specified threshold, which is tuned on the validation set.",
"For Quasar-T we also test the Sliding Window (SW) and Sliding Window + Distance (SW+D) baselines proposed in BIBREF13 . The scores were computed for the list of candidate solutions described in Section \"Context Retrieval\" .",
"For Quasar-S, since the answers come from a fixed vocabulary of entities, we test language model baselines which predict the most likely entity to appear in a given context. We train three n-gram baselines using the SRILM toolkit BIBREF21 for $n=3,4,5$ on the entire corpus of all Stack Overflow posts. The output predictions are restricted to the output vocabulary of entities.",
"We also train a bidirectional Recurrent Neural Network (RNN) language model (based on GRU units). This model encodes both the left and right context of an entity using forward and backward GRUs, and then concatenates the final states from both to predict the entity through a softmax layer. Training is performed on the entire corpus of Stack Overflow posts, with the loss computed only over mentions of entities in the output vocabulary. This approach benefits from looking at both sides of the cloze in a query to predict the entity, as compared to the single-sided n-gram baselines.",
"Reading comprehension models are trained to extract the answer from the given passage. We test two recent architectures on Quasar using publicly available code from the authors .",
"The GA Reader BIBREF8 is a multi-layer neural network which extracts a single token from the passage to answer a given query. At the time of writing it had state-of-the-art performance on several cloze-style datasets for QA. For Quasar-S we train and test GA on all instances for which the correct answer is found within the retrieved context. For Quasar-T we train and test GA on all instances where the answer is in the context and is a single token.",
"The BiDAF model BIBREF9 is also a multi-layer neural network which predicts a span of text from the passage as the answer to a given query. At the time of writing it had state-of-the-art performance among published models on the Squad dataset. For Quasar-T we train and test BiDAF on all instances where the answer is in the retrieved context."
],
[
"Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is varied. Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more documents is not sufficient – finding the few most relevant ones will allow the reader to work best.",
"In Tables 2 and 3 we compare all baselines when the context size is tuned to maximize the overall accuracy on the validation set. For Quasar-S the best performing baseline is the BiRNN language model, which achieves $33.6\\%$ accuracy. The GA model achieves $48.3\\%$ accuracy on the set of instances for which the answer is in context, however, a search accuracy of only $65\\%$ means its overall performance is lower. This can improve with improved retrieval. For Quasar-T, both the neural models significantly outperform the heuristic models, with BiDAF getting the highest F1 score of $28.5\\%$ .",
"The best performing baselines, however, lag behind human performance by $16.4\\%$ and $32.1\\%$ for Quasar-S and Quasar-T respectively, indicating the strong potential for improvement. Interestingly, for human performance we observe that non-experts are able to match or beat the performance of experts when given access to the background corpus for searching the answers. We also emphasize that the human performance is limited by either the knowledge of the experts, or the usefulness of the search engine for non-experts; it should not be viewed as an upper bound for automatic systems which can potentially use the entire background corpus. Further analysis of the human and baseline performance in each category of annotated questions is provided in Appendix \"Performance Analysis\" ."
],
[
"We have presented the Quasar datasets for promoting research into two related tasks for QA – searching a large corpus of text for relevant passages, and reading the passages to extract answers. We have also described baseline systems for the two tasks which perform reasonably but lag behind human performance. While the searching performance improves as we retrieve more context, the reading performance typically goes down. Hence, future work, in addition to improving these components individually, should also focus on joint approaches to optimizing the two on end-task performance. The datasets, including the documents retrieved by our system and the human annotations, are available at https://github.com/bdhingra/quasar."
],
[
"This work was funded by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google."
],
[
"Table 4 includes the definition of all the annotated relations for Quasar-S."
],
[
"Figure 5 shows a comparison of the human performance with the best performing baseline for each category of annotated questions. We see consistent differences between the two, except in the following cases. For Quasar-S, Bi-RNN performs comparably to humans for the developed-with and runs-on categories, but much worse in the has-component and is-a categories. For Quasar-T, BiDAF performs comparably to humans in the sports category, but much worse in history & religion and language, or when the answer type is a number or date/time."
]
]
} | {
"question": [
"Which retrieval system was used for baselines?"
],
"question_id": [
"dcb18516369c3cf9838e83168357aed6643ae1b8"
],
"nlp_background": [
"five"
],
"topic_background": [
"familiar"
],
"paper_read": [
"somewhat"
],
"search_query": [
"question"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "The dataset comes with a ranked set of relevant documents. Hence the baselines do not use a retrieval system.",
"evidence": [
"Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution. In this section, we describe how each of these fields was generated for each Quasar variant.",
"The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.",
"Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is varied. Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more documents is not sufficient – finding the few most relevant ones will allow the reader to work best."
],
"highlighted_evidence": [
"Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution.",
"The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.",
"Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy.",
"Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents."
]
}
],
"annotation_id": [
"00112b6bc9f87e8d1943add164637a03ebc74336"
],
"worker_id": [
"1d87720d0db14aa36d083b7dc3999984c4489389"
]
}
]
} | {
"caption": [
"Figure 1: Example short-document instances from QUASAR-S (top) and QUASAR-T (bottom)",
"Figure 2: Cloze generation",
"Table 1: Dataset Statistics. Single-Token refers to the questions whose answer is a single token (for QUASAR-S all answers come from a fixed vocabulary). Answer in Short (Long) indicates whether the answer is present in the retrieved short (long) pseudo-documents.",
"Figure 3: Distribution of manual annotations for QUASAR. Description of the QUASAR-S annotations is in Appendix A.",
"Figure 4: Variation of Search, Read and Overall accuracies as the number of context documents is varied.",
"Table 2: Performance comparison on QUASAR-S. CB: Closed-Book, OB: Open Book. Neural baselines are denoted with †. Optimal context is the number of documents used for answer extraction, which was tuned to maximize the overall accuracy on validation set.",
"Table 3: Performance comparison on QUASAR-T. CB: Closed-Book, OB: Open Book. Neural baselines are denoted with †. Optimal context is the number of documents used for answer extraction, which was tuned to maximize the overall accuracy on validation set.**We were unable to run BiDAF with more than 10 short-documents / 1 long-documents, and GA with more than 10 long-documents due to memory errors.",
"Table 4: Description of the annotated relations between the head entity, from whose definition the cloze is constructed, and the answer entity which fills in the cloze. These are the same as the descriptions shown to the annotators.",
"Figure 5: Performance comparison of humans and the best performing baseline across the categories annotated for the development set."
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"6-Table1-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"9-Table2-1.png",
"9-Table3-1.png",
"11-Table4-1.png",
"11-Figure5-1.png"
]
} |
1911.07228 | Error Analysis for Vietnamese Named Entity Recognition on Deep Neural Network Models | In recent years, Vietnamese Named Entity Recognition (NER) systems have had a great breakthrough when using Deep Neural Network methods. This paper describes the primary errors of the state-of-the-art NER systems on Vietnamese language. After conducting experiments on BLSTM-CNN-CRF and BLSTM-CRF models with different word embeddings on the Vietnamese NER dataset. This dataset is provided by VLSP in 2016 and used to evaluate most of the current Vietnamese NER systems. We noticed that BLSTM-CNN-CRF gives better results, therefore, we analyze the errors on this model in detail. Our error-analysis results provide us thorough insights in order to increase the performance of NER for the Vietnamese language and improve the quality of the corpus in the future works. | {
"section_name": [
"Introduction",
"Related work",
"Error-analysis method",
"Data and model ::: Data sets",
"Data and model ::: Pre-trained word Embeddings",
"Data and model ::: Model",
"Experiment and Results",
"Experiment and Results ::: Error analysis on gold data",
"Experiment and Results ::: Analysis on predicted data",
"Experiment and Results ::: Errors of annotators",
"Conclusion"
],
"paragraphs": [
[
"Named Entity Recognition (NER) is one of information extraction subtasks that is responsible for detecting entity elements from raw text and can determine the category in which the element belongs, these categories include the names of persons, organizations, locations, expressions of times, quantities, monetary values and percentages.",
"The problem of NER is described as follow:",
"Input: A sentence S consists a sequence of $n$ words: $S= w_1,w_2,w_3,…,w_n$ ($w_i$: the $i^{th}$ word)",
"Output: The sequence of $n$ labels $y_1,y_2,y_3,…,y_n$. Each $y_i$ label represents the category which $w_i$ belongs to.",
"For example, given a sentence:",
"Input: vietnamGiám đốc điều hành Tim Cook của Apple vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện Flint Center, Cupertino.",
"(Apple CEO Tim Cook introduces 2 new, larger iPhones, Smart Watch at Cupertino Flint Center event)",
"The algorithm will output:",
"Output: vietnam⟨O⟩Giám đốc điều hành⟨O⟩ ⟨PER⟩Tim Cook⟨PER⟩ ⟨O⟩của⟨O⟩ ⟨ORG⟩Apple⟨ORG⟩ ⟨O⟩vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện⟨O⟩ ⟨ORG⟩Flint Center⟨ORG⟩, ⟨LOC⟩Cupertino⟨LOC⟩.",
"With LOC, PER, ORG is Name of location, person, organization respectively. Note that O means Other (Not a Name entity). We will not denote the O label in the following examples in this article because we only care about name of entities.",
"In this paper, we analyze common errors of the previous state-of-the-art techniques using Deep Neural Network (DNN) on VLSP Corpus. This may contribute to the later researchers the common errors from the results of these state-of-the-art models, then they can rely on to improve the model.",
"Section 2 discusses the related works to this paper. We will present a method for evaluating and analyzing the types of errors in Section 3. The data used for testing and analysis of errors will be introduced in Section 4, we also talk about deep neural network methods and pre-trained word embeddings for experimentation in this section. Section 5 will detail the errors and evaluations. In the end is our contribution to improve the above errors."
],
[
"Previously publicly available NER systems do not use DNN, for example, the MITRE Identification Scrubber Toolkit (MIST) BIBREF0, Stanford NER BIBREF1, BANNER BIBREF2 and NERsuite BIBREF3. NER systems for Vietnamese language processing used traditional machine learning methods such as Maximum Entropy Markov Model (MEMM), Support Vector Machine (SVM) and Conditional Random Field (CRF). In particular, most of the toolkits for NER task attempted to use MEMM BIBREF4, and CRF BIBREF5 to solve this problem.",
"Nowadays, because of the increase in data, DNN methods are used a lot. They have archived great results when it comes to NER tasks, for example, Guillaume Lample et al with BLSTM-CRF in BIBREF6 report 90.94 F1 score, Chiu et al with BLSTM-CNN in BIBREF7 got 91.62 F1 score, Xeuzhe Ma and Eduard Hovy with BLSTM-CNN-CRF in BIBREF8 achieved F1 score of 91.21, Thai-Hoang Pham and Phuong Le-Hong with BLSTM-CNN-CRF in BIBREF9 got 88.59% F1 score. These DNN models are also the state-of-the-art models."
],
[
"The results of our analysis experiments are reported in precision and recall over all labels (name of person, location, organization and miscellaneous). The process of analyzing errors has 2 steps:",
"Step 1: We use two state-of-the-art models including BLSTM-CNN-CRF and BLSTM-CRF to train and test on VLSP’s NER corpus. In our experiments, we implement word embeddings as features to the two systems.",
"Step 2: Based on the best results (BLSTM-CNN-CRF), error analysis is performed based on five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze on both gold labels and predicted labels (more detail in figure 1 and 2).",
"A token (an entity name maybe contain more than one word) will be extracted as a correct entity by the model if both of the followings are correct:",
"The length of it (range) is correct: The word beginning and the end is the same as gold data (annotator).",
"The label (tag) of it is correct: The label is the same as in gold data.",
"If it is not meet two above requirements, it will be the wrong entity (an error). Therefore, we divide the errors into five different types which are described in detail as follows:",
"No extraction: The error where the model did not extract tokens as a name entity (NE) though the tokens were annotated as a NE.",
"LSTM-CNN-CRF: vietnam Việt_Nam",
"Annotator: vietnam⟨LOC⟩ Việt_Nam ⟨LOC⟩",
"No annotation: The error where the model extracted tokens as an NE though the tokens were not annotated as a NE.",
"LSTM-CNN-CRF: vietnam⟨PER⟩ Châu Âu ⟨PER⟩",
"Annotator: vietnamChâu Âu",
"Wrong range: The error where the model extracted tokens as an NE and only the range was wrong. (The extracted tokens were partially annotated or they were the part of the annotated tokens).",
"LSTM-CNN-CRF: vietnam⟨PER⟩ Ca_sĩ Nguyễn Văn A ⟨PER⟩",
"Annotator:",
"vietnamCa_sĩ ⟨PER⟩ Nguyễn Văn A ⟨PER⟩",
"Wrong tag: The error where the model extracted tokens as an NE and only the tag type was wrong.",
"LSTM-CNN-CRF: vietnamKhám phá ⟨PER⟩ Yangsuri ⟨PER⟩",
"Annotator:",
"vietnamKhám phá ⟨LOC⟩ Yangsuri ⟨LOC⟩",
"Wrong range and tag: The error where the model extracted tokens as an NE and both the range and the tag type were wrong.",
"LSTM-CNN-CRF: vietnam⟨LOC⟩ gian_hàng Apple ⟨LOC⟩",
"Annotator:",
"vietnamgian_hàng ⟨ORG⟩ Apple ⟨ORG⟩",
"We compare the predicted NEs to the gold NEs ($Fig. 1$), if they have the same range, the predicted NE is a correct or Wrong tag. If it has different range with the gold NE, we will see what type of wrong it is. If it does not have any overlap, it is a No extraction. If it has an overlap and the tag is the same at gold NE, it is a Wrong range. Finally, it is a Wrong range and tag if it has an overlap but the tag is different. The steps in Fig. 2 is the same at Fig. 1 and the different only is we compare the gold NE to the predicted NE, and No extraction type will be No annotation."
],
[
"To conduct error analysis of the model, we used the corpus which are provided by VLSP 2016 - Named Entity Recognition. The dataset contains four different types of label: Location (LOC), Person (PER), Organization (ORG) and Miscellaneous - Name of an entity that do not belong to 3 types above (Table TABREF15). Although the corpus has more information about the POS and chunks, but we do not use them as features in our model.",
"There are two folders with 267 text files of training data and 45 text files of test data. They all have their own format. We take 21 first text files and 22 last text files and 22 sentences of the 22th text file and 55 sentences of the 245th text file to be a development data. The remaining files are going to be the training data. The test file is the same at the file VSLP gave. Finally, we have 3 text files only based on the CoNLL 2003 format: train, dev and test."
],
[
"We use the word embeddings for Vietnamese that created by Kyubyong Park and Edouard Grave at al:",
"Kyubyong Park: In his project, he uses two methods including fastText and word2vec to generate word embeddings from wikipedia database backup dumps. His word embedding is the vector of 100 dimension and it has about 10k words.",
"Edouard Grave et al BIBREF11: They use fastText tool to generate word embeddings from Wikipedia. The format is the same at Kyubyong's, but their embedding is the vector of 300 dimension, and they have about 200k words"
],
[
"Based on state-of-the-art methods for NER, BLSTM-CNN-CRF is the end-to-end deep neural network model that achieves the best result on F-score BIBREF9. Therefore, we decide to conduct the experiment on this model and analyze the errors.",
"We run experiment with the Ma and Hovy (2016) model BIBREF8, source code provided by (Motoki Sato) and analysis the errors from this result. Before we decide to analysis on this result, we have run some other methods, but this one with Vietnamese pre-trained word embeddings provided by Kyubyong Park obtains the best result. Other results are shown in the Table 2."
],
[
"Table 2 shows our experiments on two models with and without different pre-trained word embedding – KP means the Kyubyong Park’s pre-trained word embeddings and EG means Edouard Grave’s pre-trained word embeddings.",
"We compare the outputs of BLSTM-CNN-CRF model (predicted) to the annotated data (gold) and analyzed the errors. Table 3 shows perfomance of the BLSTM-CNN-CRF model. In our experiments, we use three evaluation parameters (precision, recall, and F1 score) to access our experimental result. They will be described as follow in Table 3. The \"correctNE\", the number of correct label for entity that the model can found. The \"goldNE\", number of the real label annotated by annotator in the gold data. The \"foundNE\", number of the label the model find out (no matter if they are correct or not).",
"In Table 3 above, we can see that recall score on ORG label is lowest. The reason is almost all the ORG label on test file is name of some brands that do not appear on training data and pre-trained word embedding. On the other side, the characters inside these brand names also inside the other names of person in the training data. The context from both side of the sentence (future- and past-feature) also make the model \"think\" the name entity not as it should be.",
"Table 4 shows that the biggest number of errors is No extraction. The errors were counted by using logical sum (OR) of the gold labels and predicted labels (predicted by the model). The second most frequent error was Wrong tag means the model extract it's a NE but wrong tag."
],
[
"First of all, we will compare the predicted NEs to the gold NEs (Fig. 1). Table 4 shows the summary of errors by types based on the gold labels, the \"correct\" is the number of gold tag that the model predicted correctly, \"error\" is the number of gold tag that the model predicted incorrectly, and \"total\" is sum of them. Four columns next show the number of type errors on each label.",
"Table 5 shows that Person, Location and Organization is the main reason why No extraction and Wrong tag are high.",
"After analyzing based on the gold NEs, we figure out the reason is:",
"Almost all the NEs is wrong, they do not appear on training data and pre-trained embedding. These NEs vector will be initial randomly, therefore, these vectors are poor which means have no semantic aspect.",
"The \"weird\" ORG NE in the sentence appear together with other words have context of PER, so this \"weird\" ORG NE is going to be label at PER.",
"For example:",
"gold data: vietnamVĐV được xem là đầu_tiên ký hợp_đồng quảng_cáo là võ_sĩ ⟨PER⟩ Trần Quang Hạ ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨LOC⟩ Hiroshima ⟨LOC⟩.",
"(The athlete is considered the first to sign a contract of boxing Tran Quang Ha after winning the gold medal Asiad Hiroshima)",
"predicted data: vietnam…là võ_sĩ ⟨PER⟩Trần Quang Hạ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨PER⟩Hiroshima⟨PER⟩.",
"Some mistakes of the model are from training set, for example, anonymous person named \"P.\" appears many times in the training set, so when model meets \"P.\" in context of \"P. 3 vietnamQuận 9\" (Ward 3, District 9) – \"P.\" stands for vietnam\"Phường\" (Ward) model will predict \"P.\" as a PER.",
"Training data: vietnamnếu ⟨PER⟩P.⟨PER⟩ có ở đây – (If P. were here) Predicted data: vietnam⟨PER⟩P. 3⟨PER⟩, Gò_vấp – (Ward 3, Go_vap District)"
],
[
"Table 6 shows the summary of errors by types based on the predicted data. After analyzing the errors on predicted and gold data, we noticed that the difference of these errors are mainly in the No anotation and No extraction. Therefore, we only mention the main reasons for the No anotation:",
"Most of the wrong labels that model assigns are brand names (Ex: Charriol, Dream, Jupiter, ...), words are abbreviated vietnam(XKLD – xuất khẩu lao động (labour export)), movie names, … All of these words do not appear in training data and word embedding. Perhaps these reasons are the followings:",
"The vectors of these words are random so the semantic aspect is poor.",
"The hidden states of these words also rely on past feature (forward pass) and future feature (backward pass) of the sentence. Therefore, they are assigned wrongly because of their context.",
"These words are primarily capitalized or all capital letters, so they are assigned as a name entity. This error is caused by the CNN layer extract characters information of the word.",
"Table 7 shows the detail of errors on predicted data where we will see number kind of errors on each label."
],
[
"After considering the training and test data, we realized that this data has many problems need to be fixed in the next run experiments. The annotators are not consistent between the training data and the test data, more details are shown as follow:",
"The organizations are labeled in the train data but not labeled in the test data:",
"Training data: vietnam⟨ORG⟩ Sở Y_tế ⟨ORG⟩ (Department of Health)",
"Test data: vietnamSở Y_tế (Department of Health)",
"Explanation: vietnam\"Sở Y_tế\" in train and test are the same name of organization entity. However the one in test data is not labeled.",
"The entity has the same meaning but is assigned differently between the train data and the test:",
"Training data: vietnam⟨MISC⟩ người Việt ⟨MISC⟩ (Vietnamese people)",
"Test data: vietnamdân ⟨LOC⟩ Việt ⟨LOC⟩ (Vietnamese people)",
"Explanation: vietnamBoth \"người Việt\" in train data and \"dân Việt\" in test data are the same meaning, but they are assigned differently.",
"The range of entities are differently between the train data and the test data:",
"Training data: vietnam⟨LOC⟩ làng Atâu ⟨LOC⟩ (Atâu village)",
"Test data: vietnamlàng ⟨LOC⟩ Hàn_Quốc ⟨LOC⟩ (Korea village)",
"Explanation: The two villages differ only in name, but they are labeled differently in range",
"Capitalization rules are not unified with a token is considered an entity:",
"Training data: vietnam⟨ORG⟩ Công_ty Inmasco ⟨ORG⟩ (Inmasco Company)",
"Training data: vietnamcông_ty con (Subsidiaries)",
"Test data: vietnamcông_ty ⟨ORG⟩ Yeon Young Entertainment ⟨ORG⟩ (Yeon Young Entertainment company)",
"Explanation: If it comes to a company with a specific name, it should be labeled vietnam⟨ORG⟩ Công_ty Yeon Young Entertainment ⟨ORG⟩ with \"C\" in capital letters."
],
[
"In this paper, we have presented a thorough study of distinctive error distributions produced by Bi-LSTM-CNN-CRF for the Vietnamese language. This would be helpful for researchers to create better NER models.",
"Based on the analysis results, we suggest some possible directions for improvement of model and for the improvement of data-driven NER for the Vietnamese language in future:",
"The word at the begin of the sentence is capitalized, so, if the name of person is at this position, model will ignore them (no extraction). To improve this issue, we can use the POS feature together with BIO format (Inside, Outside, Beginning) BIBREF6 at the top layer (CRF).",
"If we can unify the labeling of the annotators between the train, dev and test sets. We will improve data quality and classifier.",
"It is better if there is a pre-trained word embeddings that overlays the data, and segmentation algorithm need to be more accurately."
]
]
} | {
"question": [
"What word embeddings were used?",
"What type of errors were produced by the BLSTM-CNN-CRF system?",
"How much better was the BLSTM-CNN-CRF than the BLSTM-CRF?"
],
"question_id": [
"f46a907360d75ad566620e7f6bf7746497b6e4a9",
"79d999bdf8a343ce5b2739db3833661a1deab742",
"71d59c36225b5ee80af11d3568bdad7425f17b0c"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Kyubyong Park",
"Edouard Grave et al BIBREF11"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the word embeddings for Vietnamese that created by Kyubyong Park and Edouard Grave at al:",
"Kyubyong Park: In his project, he uses two methods including fastText and word2vec to generate word embeddings from wikipedia database backup dumps. His word embedding is the vector of 100 dimension and it has about 10k words.",
"Edouard Grave et al BIBREF11: They use fastText tool to generate word embeddings from Wikipedia. The format is the same at Kyubyong's, but their embedding is the vector of 300 dimension, and they have about 200k words"
],
"highlighted_evidence": [
"We use the word embeddings for Vietnamese that created by Kyubyong Park and Edouard Grave at al:\n\nKyubyong Park: In his project, he uses two methods including fastText and word2vec to generate word embeddings from wikipedia database backup dumps.",
"Edouard Grave et al BIBREF11: They use fastText tool to generate word embeddings from Wikipedia."
]
}
],
"annotation_id": [
"20b9bd9b3d0d70cf39bfdd986a5fd5d78f702e0f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Step 2: Based on the best results (BLSTM-CNN-CRF), error analysis is performed based on five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze on both gold labels and predicted labels (more detail in figure 1 and 2)."
],
"highlighted_evidence": [
"Based on the best results (BLSTM-CNN-CRF), error analysis is performed based on five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze on both gold labels and predicted labels (more detail in figure 1 and 2)."
]
}
],
"annotation_id": [
"005a24a2b8b811b9cdc7cafd54a4b71a9e9d480f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Best BLSTM-CNN-CRF had F1 score 86.87 vs 86.69 of best BLSTM-CRF ",
"evidence": [
"Table 2 shows our experiments on two models with and without different pre-trained word embedding – KP means the Kyubyong Park’s pre-trained word embeddings and EG means Edouard Grave’s pre-trained word embeddings.",
"FLOAT SELECTED: Table 2. F1 score of two models with different pre-trained word embeddings"
],
"highlighted_evidence": [
"Table 2 shows our experiments on two models with and without different pre-trained word embedding – KP means the Kyubyong Park’s pre-trained word embeddings and EG means Edouard Grave’s pre-trained word embeddings.",
"FLOAT SELECTED: Table 2. F1 score of two models with different pre-trained word embeddings"
]
}
],
"annotation_id": [
"fa1cc9386772d41918c8d3a69201067dbdbf5dba"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Fig. 1. Chart flow to analyze errors based on gold labels",
"Fig. 2. Chart flow to analyze errors based on predicted labels",
"Table 1. Number type of each tags in the corpus",
"Table 2. F1 score of two models with different pre-trained word embeddings",
"Table 3. Performances of LSTM-CNN-CRF on the Vietnamese NER corpus",
"Table 4. Summary of error results on gold data",
"Table 5. Summary of detailed error results on gold data",
"Table 6. Summary of error results on predicted data",
"Table 7. Summary of detailed error results on predicted data"
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"5-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"9-Table6-1.png",
"10-Table7-1.png"
]
} |
1603.07044 | Recurrent Neural Network Encoder with Attention for Community Question Answering | We apply a general recurrent neural network (RNN) encoder framework to community question answering (cQA) tasks. Our approach does not rely on any linguistic processing, and can be applied to different languages or domains. Further improvements are observed when we extend the RNN encoders with a neural attention mechanism that encourages reasoning over entire sequences. To deal with practical issues such as data sparsity and imbalanced labels, we apply various techniques such as transfer learning and multitask learning. Our experiments on the SemEval-2016 cQA task show 10% improvement on a MAP score compared to an information retrieval-based approach, and achieve comparable performance to a strong handcrafted feature-based method. | {
"section_name": [
"Introduction",
"Related Work",
"Method",
"LSTM Models",
"Neural Attention",
"Predicting Relationships of Object Pairs with an Attention Model",
"Modeling Question-External Comments",
"Experiments",
"Preliminary Results",
"Robust Parameter Initialization",
"Multitask Learning",
"Augmented data",
"Augmented features",
"Comparison with Other Systems",
"Analysis of Attention Mechanism",
"Short Sentences",
"Long Sentences",
"Noisy Sentence",
"Conclusion"
],
"paragraphs": [
[
"Community question answering (cQA) is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions. In the past decade, these websites have attracted a great number of users, and have accumulated a large collection of question-comment threads generated by these users. However, the low restriction results in a high variation in answer quality, which makes it time-consuming to search for useful information from the existing content. It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question, or when looking for solutions from comments of an existing question.",
"Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.",
"To overcome these issues, most previous work (e.g. SemEval 2015 BIBREF1 ) relied heavily on additional features and reasoning capabilities. In BIBREF2 , a neural attention-based model was proposed for automatically recognizing entailment relations between pairs of natural language sentences. In this study, we first modify this model for all three cQA tasks. We also extend this framework into a jointly trained model when the external resources are available, i.e. selecting an external comment when we know the question that the external comment answers (Task C).",
"Our ultimate objective is to classify relevant questions and comments without complicated handcrafted features. By applying RNN-based encoders, we avoid heavily engineered features and learn the representation automatically. In addition, an attention mechanism augments encoders with the ability to attend to past outputs directly. This becomes helpful when encoding longer sequences, since we no longer need to compress all information into a fixed-length vector representation.",
"In our view, existing annotated cQA corpora are generally too small to properly train an end-to-end neural network. To address this, we investigate transfer learning by pretraining the recurrent systems on other corpora, and also generating additional instances from existing cQA corpus."
],
[
"Earlier work of community question answering relied heavily on feature engineering, linguistic tools, and external resource. BIBREF3 and BIBREF4 utilized rich non-textual features such as answer's profile. BIBREF5 syntactically analyzed the question and extracted name entity features. BIBREF6 demonstrated a textual entailment system can enhance cQA task by casting question answering to logical entailment.",
"More recent work incorporated word vector into their feature extraction system and based on it designed different distance metric for question and answer BIBREF7 BIBREF8 . While these approaches showed effectiveness, it is difficult to generalize them to common cQA tasks since linguistic tools and external resource may be restrictive in other languages and features are highly customized for each cQA task.",
"Very recent work on answer selection also involved the use of neural networks. BIBREF9 used LSTM to construct a joint vector based on both the question and the answer and then converted it into a learning to rank problem. BIBREF10 proposed several convolutional neural network (CNN) architectures for cQA. Our method differs in that RNN encoder is applied here and by adding attention mechanism we jointly learn which words in question to focus and hence available to conduct qualitative analysis. During classification, we feed the extracted vector into a feed-forward neural network directly instead of using mean/max pooling on top of each time steps."
],
[
"In this section, we first discuss long short-term memory (LSTM) units and an associated attention mechanism. Next, we explain how we can encode a pair of sentences into a dense vector for predicting relationships using an LSTM with an attention mechanism. Finally, we apply these models to predict question-question similarity, question-comment similarity, and question-external comment similarity."
],
[
"LSTMs have shown great success in many different fields. An LSTM unit contains a memory cell with self-connections, as well as three multiplicative gates to control information flow. Given input vector $x_t$ , previous hidden outputs $h_{t-1}$ , and previous cell state $c_{t-1}$ , LSTM units operate as follows: ",
"$$X &= \\begin{bmatrix}\nx_t\\\\[0.3em]\nh_{t-1}\\\\[0.3em]\n\\end{bmatrix}\\\\\ni_t &= \\sigma (\\mathbf {W_{iX}}X + \\mathbf {W_{ic}}c_{t-1} + \\mathbf {b_i})\\\\\nf_t &= \\sigma (\\mathbf {W_{fX}}X + \\mathbf {W_{fc}}c_{t-1} + \\mathbf {b_f})\\\\\no_t &= \\sigma (\\mathbf {W_{oX}}X + \\mathbf {W_{oc}}c_{t-1} + \\mathbf {b_o})\\\\\nc_t &= f_t \\odot c_{t-1} + i_t \\odot tanh(\\mathbf {W_{cX}}X + \\mathbf {b_c})\\\\\nh_t &= o_t \\odot tanh(c_t)$$ (Eq. 3) ",
"where $i_t$ , $f_t$ , $o_t$ are input, forget, and output gates, respectively. The sigmoid function $\\sigma ()$ is a soft gate function controlling the amount of information flow. $W$ s and $b$ s are model parameters to learn."
],
[
"A traditional RNN encoder-decoder approach BIBREF11 first encodes an arbitrary length input sequence into a fixed-length dense vector that can be used as input to subsequent classification models, or to initialize the hidden state of a secondary decoder. However, the requirement to compress all necessary information into a single fixed length vector can be problematic. A neural attention model BIBREF12 BIBREF13 has been recently proposed to alleviate this issue by enabling the network to attend to past outputs when decoding. Thus, the encoder no longer needs to represent an entire sequence with one vector; instead, it encodes information into a sequence of vectors, and adaptively chooses a subset of the vectors when decoding."
],
[
"In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.",
"The representations of the two objects are generated independently in this manner. However, we are more interested in the relationship instead of the object representations themselves. Therefore, we consider a serialized LSTM-encoder model in the right side of Figure 1 that is similar to that in BIBREF2 , but also allows an augmented feature input to the FNN classifier.",
"Figure 2 illustrates our attention framework in more detail. The first LSTM reads one object, and passes information through hidden units to the second LSTM. The second LSTM then reads the other object and generates the representation of this pair after the entire sequence is processed. We build another FNN that takes this representation as input to classify the relationship of this pair.",
"By adding an attention mechanism to the encoder, we allow the second LSTM to attend to the sequence of output vectors from the first LSTM, and hence generate a weighted representation of first object according to both objects. Let $h_N$ be the last output of second LSTM and $M = [h_1, h_2, \\cdots , h_L]$ be the sequence of output vectors of the first object. The weighted representation of the first object is ",
"$$h^{\\prime } = \\sum _{i=1}^{L} \\alpha _i h_i$$ (Eq. 7) ",
"The weight is computed by ",
"$$\\alpha _i = \\dfrac{exp(a(h_i,h_N))}{\\sum _{j=1}^{L}exp(a(h_j,h_N))}$$ (Eq. 8) ",
"where $a()$ is the importance model that produces a higher score for $(h_i, h_N)$ if $h_i$ is useful to determine the object pair's relationship. We parametrize this model using another FNN. Note that in our framework, we also allow other augmented features (e.g., the ranking score from the IR system) to enhance the classifier. So the final input to the classifier will be $h_N$ , $h^{\\prime }$ , as well as augmented features."
],
[
"For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC).",
"Figure 3 shows our framework: the three lower models are separate serialized LSTM-encoders for the three respective object pairs, whereas the upper model is an FNN that takes as input the concatenation of the outputs of three encoders, and predicts the relationships for all three pairs. More specifically, the output layer consists of three softmax layers where each one is intended to predict the relationship of one particular pair.",
"For the overall loss function, we combine three separate loss functions using a heuristic weight vector $\\beta $ that allocates a higher weight to the main task (oriQ-relC relationship prediction) as follows: ",
"$$\\mathcal {L} = \\beta _1 \\mathcal {L}_1 + \\beta _2 \\mathcal {L}_2 + \\beta _3 \\mathcal {L}_3$$ (Eq. 11) ",
"By doing so, we hypothesize that the related tasks can improve the main task by leveraging commonality among all tasks."
],
[
"We evaluate our approach on all three cQA tasks. We use the cQA datasets provided by the Semeval 2016 task . The cQA data is organized as follows: there are 267 original questions, each question has 10 related question, and each related question has 10 comments. Therefore, for task A, there are a total number of 26,700 question-comment pairs. For task B, there are 2,670 question-question pairs. For task C, there are 26,700 question-comment pairs. The test dataset includes 50 questions, 500 related questions and 5,000 comments which do not overlap with the training set. To evaluate the performance, we use mean average precision (MAP) and F1 score."
],
[
"Table 2 shows the initial results using the RNN encoder for different tasks. We observe that the attention model always gets better results than the RNN without attention, especially for task C. However, the RNN model achieves a very low F1 score. For task B, it is even worse than the random baseline. We believe the reason is because for task B, there are only 2,670 pairs for training which is very limited training for a reasonable neural network. For task C, we believe the problem is highly imbalanced data. Since the related comments did not directly comment on the original question, more than $90\\%$ of the comments are labeled as irrelevant to the original question. The low F1 (with high precision and low recall) means our system tends to label most comments as irrelevant. In the following section, we investigate methods to address these issues."
],
[
"One way to improve models trained on limited data is to use external data to pretrain the neural network. We therefore considered two different datasets for this task.",
"Cross-domain: The Stanford natural language inference (SNLI) corpus BIBREF17 has a huge amount of cleaned premise and hypothesis pairs. Unfortunately the pairs are for a different task. The relationship between the premise and hypothesis may be similar to the relation between questions and comments, but may also be different.",
"In-domain: since task A seems has reasonable performance, and the network is also well-trained, we could use it directly to initialize task B.",
"To utilize the data, we first trained the model on each auxiliary data (SNLI or Task A) and then removed the softmax layer. After that, we retrain the network using the target data with a softmax layer that was randomly initialized.",
"For task A, the SNLI cannot improve MAP or F1 scores. Actually it slightly hurts the performance. We surmise that it is probably because the domain is different. Further investigation is needed: for example, we could only use the parameter for embedding layers etc. For task B, the SNLI yields a slight improvement on MAP ( $0.2\\%$ ), and Task A could give ( $1.2\\%$ ) on top of that. No improvement was observed on F1. For task C, pretraining by task A is also better than using SNLI (task A is $1\\%$ better than the baseline, while SNLI is almost the same).",
"In summary, the in-domain pretraining seems better, but overall, the improvement is less than we expected, especially for task B, which only has very limited target data. We will not make a conclusion here since more investigation is needed."
],
[
"As mentioned in Section \"Modeling Question-External Comments\" , we also explored a multitask learning framework that jointly learns to predict the relationships of all three tasks. We set $0.8$ for the main task (task C) and $0.1$ for the other auxiliary tasks. The MAP score did not improve, but F1 increases to $0.1617$ . We believe this is because other tasks have more balanced labels, which improves the shared parameters for task C."
],
[
"There are many sources of external question-answer pairs that could be used in our tasks. For example: WebQuestion (was introduced by the authors of SEMPRE system BIBREF18 ) and The SimpleQuestions dataset . All of them are positive examples for our task and we can easily create negative examples from it. Initial experiments indicate that it is very easy to overfit these obvious negative examples. We believe this is because our negative examples are non-informative for our task and just introduce noise.",
"Since the external data seems to hurt the performance, we try to use the in-domain pairs to enhance task B and task C. For task B, if relative question 1 (rel1) and relative question 2 (rel2) are both relevant to the original question, then we add a positive sample (rel1, rel2, 1). If either rel1 and rel2 is irrelevant and the other is relevant, we add a negative sample (rel1, rel2, 0). After doing this, the samples of task B increase from $2,670$ to $11,810$ . By applying this method, the MAP score increased slightly from $0.5723$ to $0.5789$ but the F1 score improved from $0.4334$ to $0.5860$ .",
"For task C, we used task A's data directly. The results are very similar with a slight improvement on MAP, but large improvement on F1 score from $0.1449$ to $0.2064$ ."
],
[
"To further enhance the system, we incorporate a one hot vector of the original IR ranking as an additional feature into the FNN classifier. Table 3 shows the results. In comparing the models with and without augmented features, we can see large improvement for task B and C. The F1 score for task A degrades slightly but MAP improves. This might be because task A already had a substantial amount of training data."
],
[
"Table 4 gives the final comparison between different models (we only list the MAP score because it is the official score for the challenge). Since the two baseline models did not use any additional data, in this table our system was also restricted to the provided training data. For task A, we can see that if there is enough training data our single system already performs better than a very strong feature-rich based system. For task B, since only limited training data is given, both feature-rich based system and our system are worse than the IR system. For task C, our system also got comparable results with the feature-rich based system. If we do a simple system combination (average the rank score) between our system and the IR system, the combined system will give large gains on tasks B and C. This implies that our system is complimentary with the IR system."
],
[
"In addition to quantitative analysis, it is natural to qualitatively evaluate the performance of the attention mechanism by visualizing the weight distribution of each instance. We randomly picked several instances from the test set in task A, for which the sentence lengths are more moderate for demonstration. These examples are shown in Figure 5 , and categorized into short, long, and noisy sentences for discussion. A darker blue patch refers to a larger weight relative to other words in the same sentence."
],
[
"Figure 5 illustrates two cQA examples whose questions are relatively short. The comments corresponding to these questions are “...snorkeling two days ago off the coast of dukhan...” and “the doha international airport...”. We can observe that our model successfully learns to focus on the most representative part of the question pertaining to classifying the relationship, which is \"place for snorkeling\" for the first example and “place can ... visited in qatar” for the second example."
],
[
"In Figure 5 , we investigate two examples with longer questions, which both contain 63 words. Interestingly, the distribution of weights does not become more uniform; the model still focuses attention on a small number of hot words, for example, “puppy dog for ... mall” and “hectic driving in doha ... car insurance ... quite costly”. Additionally, some words that appear frequently but carry little information for classification are assigned very small weights, such as I/we/my, is/am, like, and to."
],
[
"Due to the open nature of cQA forums, some content is noisy. Figure 5 is an example with excessive usage of question marks. Again, our model exhibits its robustness by allocating very low weights to the noise symbols and therefore excludes the noninformative content."
],
[
"In this paper, we demonstrate that a general RNN encoder framework can be applied to community question answering tasks. By adding a neural attention mechanism, we showed quantitatively and qualitatively that attention can improve the RNN encoder framework. To deal with a more realistic scenario, we expanded the framework to incorporate metadata as augmented inputs to a FNN classifier, and pretrained models on larger datasets, increasing both stability and performance. Our model is consistently better than or comparable to a strong feature-rich baseline system, and is superior to an IR-based system when there is a reasonable amount of training data.",
"Our model is complimentary with an IR-based system that uses vast amounts of external resources but trained for general purposes. By combining the two systems, it exceeds the feature-rich and IR-based system in all three tasks.",
"Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015.",
"Future work could proceed in two directions: first, we can enrich the existing system by incorporating available metadata and preprocessing data with morphological normalization and out-of-vocabulary mappings; second, we can reinforce our model by carrying out word-by-word and history-aware attention mechanisms instead of attending only when reading the last word."
]
]
} | {
"question": [
"What supplemental tasks are used for multitask learning?",
"Is the improvement actually coming from using an RNN?",
"How much performance gap between their approach and the strong handcrafted method?",
"What is a strong feature-based method?",
"Did they experimnet in other languages?"
],
"question_id": [
"efc65e5032588da4a134d121fe50d49fe8fe5e8c",
"a30958c7123d1ad4723dcfd19d8346ccedb136d5",
"08333e4dd1da7d6b5e9b645d40ec9d502823f5d7",
"bc1bc92920a757d5ec38007a27d0f49cb2dde0d1",
"942eb1f7b243cdcfd47f176bcc71de2ef48a17c4"
],
"nlp_background": [
"five",
"five",
"two",
"two",
"two"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"somewhat",
"somewhat",
"no",
"no",
"no"
],
"search_query": [
"question answering",
"question answering",
"Question Answering",
"Question Answering",
"Question Answering"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Multitask learning is used for the task of predicting relevance of a comment on a different question to a given question, where the supplemental tasks are predicting relevance between the questions, and between the comment and the corresponding question",
"evidence": [
"Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.",
"In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.",
"For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC)."
],
"highlighted_evidence": [
"Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C).",
"In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant.",
"For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC)."
]
}
],
"annotation_id": [
"005fda1710dc27880d84605c9bb3971e626fda3b"
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"4c8ee0a6a696fcf32952cf3af380a67a2f13d3dc"
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "0.007 MAP on Task A, 0.032 MAP on Task B, 0.055 MAP on Task C",
"evidence": [
"FLOAT SELECTED: Table 4: Compared with other systems (bold is best)."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Compared with other systems (bold is best)."
]
}
],
"annotation_id": [
"0e10c370139082b10a811c4b9dd46fb990dc2ea7"
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"fd5c3d425ea41f2498d7231e6f3f86aa27294e59"
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015."
],
"highlighted_evidence": [
"We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. "
]
}
],
"annotation_id": [
"92d9c65afd196f9731be8244c24e2fa52f2ff870"
],
"worker_id": [
"99669777a05f235adcbdaa21bb372fb9ecc5a542"
]
}
]
} | {
"caption": [
"Figure 1: RNN encoder for related question/comment selection.",
"Figure 2: Neural attention model for related question/comment selection.",
"Figure 3: Joint learning for external comment selection.",
"Figure 4: IR-based system and feature-rich based system.",
"Table 2: The RNN encoder results for cQA tasks (bold is best).",
"Table 3: cQA task results with augmented features (bold is best).",
"Table 4: Compared with other systems (bold is best).",
"Figure 5: Visualization of attention mechanism on short, long, and noisy sentences."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"5-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Figure5-1.png"
]
} |
1902.09314 | Attentional Encoder Network for Targeted Sentiment Classification | Targeted sentiment classification aims at determining the sentimental tendency towards specific targets. Most of the previous approaches model context and target words with RNN and attention. However, RNNs are difficult to parallelize and truncated backpropagation through time brings difficulty in remembering long-term patterns. To address this issue, this paper proposes an Attentional Encoder Network (AEN) which eschews recurrence and employs attention based encoders for the modeling between context and target. We raise the label unreliability issue and introduce label smoothing regularization. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and lightweight of our model. | {
"section_name": [
"Introduction",
"Related Work",
"Proposed Methodology",
"Embedding Layer",
"Attentional Encoder Layer",
"Target-specific Attention Layer",
"Output Layer",
"Regularization and Model Training",
"Datasets and Experimental Settings",
"Model Comparisons",
"Main Results",
"Model Analysis",
"Conclusion"
],
"paragraphs": [
[
"Targeted sentiment classification is a fine-grained sentiment analysis task, which aims at determining the sentiment polarities (e.g., negative, neutral, or positive) of a sentence over “opinion targets” that explicitly appear in the sentence. For example, given a sentence “I hated their service, but their food was great”, the sentiment polarities for the target “service” and “food” are negative and positive respectively. A target is usually an entity or an entity aspect.",
"In recent years, neural network models are designed to automatically learn useful low-dimensional representations from targets and contexts and obtain promising results BIBREF0 , BIBREF1 . However, these neural network models are still in infancy to deal with the fine-grained targeted sentiment classification task.",
"Attention mechanism, which has been successfully used in machine translation BIBREF2 , is incorporated to enforce the model to pay more attention to context words with closer semantic relations with the target. There are already some studies use attention to generate target-specific sentence representations BIBREF3 , BIBREF4 , BIBREF5 or to transform sentence representations according to target words BIBREF6 . However, these studies depend on complex recurrent neural networks (RNNs) as sequence encoder to compute hidden semantics of texts.",
"The first problem with previous works is that the modeling of text relies on RNNs. RNNs, such as LSTM, are very expressive, but they are hard to parallelize and backpropagation through time (BPTT) requires large amounts of memory and computation. Moreover, essentially every training algorithm of RNN is the truncated BPTT, which affects the model's ability to capture dependencies over longer time scales BIBREF7 . Although LSTM can alleviate the vanishing gradient problem to a certain extent and thus maintain long distance information, this usually requires a large amount of training data. Another problem that previous studies ignore is the label unreliability issue, since neutral sentiment is a fuzzy sentimental state and brings difficulty for model learning. As far as we know, we are the first to raise the label unreliability issue in the targeted sentiment classification task.",
"This paper propose an attention based model to solve the problems above. Specifically, our model eschews recurrence and employs attention as a competitive alternative to draw the introspective and interactive semantics between target and context words. To deal with the label unreliability issue, we employ a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT BIBREF8 to this task and show our model enhances the performance of basic BERT model. Experimental results on three benchmark datasets show that the proposed model achieves competitive performance and is a lightweight alternative of the best RNN based models.",
"The main contributions of this work are presented as follows:"
],
[
"The research approach of the targeted sentiment classification task including traditional machine learning methods and neural networks methods.",
"Traditional machine learning methods, including rule-based methods BIBREF9 and statistic-based methods BIBREF10 , mainly focus on extracting a set of features like sentiment lexicons features and bag-of-words features to train a sentiment classifier BIBREF11 . The performance of these methods highly depends on the effectiveness of the feature engineering works, which are labor intensive.",
"In recent years, neural network methods are getting more and more attention as they do not need handcrafted features and can encode sentences with low-dimensional word vectors where rich semantic information stained. In order to incorporate target words into a model, Tang et al. tang2016effective propose TD-LSTM to extend LSTM by using two single-directional LSTM to model the left context and right context of the target word respectively. Tang et al. tang2016aspect design MemNet which consists of a multi-hop attention mechanism with an external memory to capture the importance of each context word concerning the given target. Multiple attention is paid to the memory represented by word embeddings to build higher semantic information. Wang et al. wang2016attention propose ATAE-LSTM which concatenates target embeddings with word representations and let targets participate in computing attention weights. Chen et al. chen2017recurrent propose RAM which adopts multiple-attention mechanism on the memory built with bidirectional LSTM and nonlinearly combines the attention results with gated recurrent units (GRUs). Ma et al. ma2017interactive propose IAN which learns the representations of the target and context with two attention networks interactively."
],
[
"Given a context sequence INLINEFORM0 and a target sequence INLINEFORM1 , where INLINEFORM2 is a sub-sequence of INLINEFORM3 . The goal of this model is to predict the sentiment polarity of the sentence INLINEFORM4 over the target INLINEFORM5 .",
"Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT."
],
[
"Let INLINEFORM0 to be the pre-trained GloVe BIBREF12 embedding matrix, where INLINEFORM1 is the dimension of word vectors and INLINEFORM2 is the vocabulary size. Then we map each word INLINEFORM3 to its corresponding embedding vector INLINEFORM4 , which is a column in the embedding matrix INLINEFORM5 .",
"BERT embedding uses the pre-trained BERT to generate word vectors of sequence. In order to facilitate the training and fine-tuning of BERT model, we transform the given context and target to “[CLS] + context + [SEP]” and “[CLS] + target + [SEP]” respectively."
],
[
"The attentional encoder layer is a parallelizable and interactive alternative of LSTM and is applied to compute the hidden states of the input embeddings. This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT).",
"Multi-Head Attention (MHA) is the attention that can perform multiple attention function in parallel. Different from Transformer BIBREF13 , we use Intra-MHA for introspective context words modeling and Inter-MHA for context-perceptive target words modeling, which is more lightweight and target is modeled according to a given context.",
"An attention function maps a key sequence INLINEFORM0 and a query sequence INLINEFORM1 to an output sequence INLINEFORM2 : DISPLAYFORM0 ",
" where INLINEFORM0 denotes the alignment function which learns the semantic relevance between INLINEFORM1 and INLINEFORM2 : DISPLAYFORM0 ",
" where INLINEFORM0 are learnable weights.",
"MHA can learn n_head different scores in parallel child spaces and is very powerful for alignments. The INLINEFORM0 outputs are concatenated and projected to the specified hidden dimension INLINEFORM1 , namely, DISPLAYFORM0 ",
" where “ INLINEFORM0 ” denotes vector concatenation, INLINEFORM1 , INLINEFORM2 is the output of the INLINEFORM3 -th head attention and INLINEFORM4 .",
"Intra-MHA, or multi-head self-attention, is a special situation for typical attention mechanism that INLINEFORM0 . Given a context embedding INLINEFORM1 , we can get the introspective context representation INLINEFORM2 by: DISPLAYFORM0 ",
" The learned context representation INLINEFORM0 is aware of long-term dependencies.",
"Inter-MHA is the generally used form of attention mechanism that INLINEFORM0 is different from INLINEFORM1 . Given a context embedding INLINEFORM2 and a target embedding INLINEFORM3 , we can get the context-perceptive target representation INLINEFORM4 by: DISPLAYFORM0 ",
"After this interactive procedure, each given target word INLINEFORM0 will have a composed representation selected from context embeddings INLINEFORM1 . Then we get the context-perceptive target words modeling INLINEFORM2 .",
"A Point-wise Convolution T ransformation (PCT) can transform contextual information gathered by the MHA. Point-wise means that the kernel sizes are 1 and the same transformation is applied to every single token belonging to the input. Formally, given a input sequence INLINEFORM0 , PCT is defined as: DISPLAYFORM0 ",
" where INLINEFORM0 stands for the ELU activation, INLINEFORM1 is the convolution operator, INLINEFORM2 and INLINEFORM3 are the learnable weights of the two convolutional kernels, INLINEFORM4 and INLINEFORM5 are biases of the two convolutional kernels.",
"Given INLINEFORM0 and INLINEFORM1 , PCTs are applied to get the output hidden states of the attentional encoder layer INLINEFORM2 and INLINEFORM3 by: DISPLAYFORM0 "
],
[
"After we obtain the introspective context representation INLINEFORM0 and the context-perceptive target representation INLINEFORM1 , we employ another MHA to obtain the target-specific context representation INLINEFORM2 by: DISPLAYFORM0 ",
" The multi-head attention function here also has its independent parameters."
],
[
"We get the final representations of the previous outputs by average pooling, concatenate them as the final comprehensive representation INLINEFORM0 , and use a full connected layer to project the concatenated vector into the space of the targeted INLINEFORM1 classes. DISPLAYFORM0 ",
" where INLINEFORM0 is the predicted sentiment polarity distribution, INLINEFORM1 and INLINEFORM2 are learnable parameters."
],
[
"Since neutral sentiment is a very fuzzy sentimental state, training samples which labeled neutral are unreliable. We employ a Label Smoothing Regularization (LSR) term in the loss function. which penalizes low entropy output distributions BIBREF14 . LSR can reduce overfitting by preventing a network from assigning the full probability to each training example during training, replaces the 0 and 1 targets for a classifier with smoothed values like 0.1 or 0.9.",
"For a training sample INLINEFORM0 with the original ground-truth label distribution INLINEFORM1 , we replace INLINEFORM2 with DISPLAYFORM0 ",
" where INLINEFORM0 is the prior distribution over labels , and INLINEFORM1 is the smoothing parameter. In this paper, we set the prior label distribution to be uniform INLINEFORM2 .",
"LSR is equivalent to the KL divergence between the prior label distribution INLINEFORM0 and the network's predicted distribution INLINEFORM1 . Formally, LSR term is defined as: DISPLAYFORM0 ",
"The objective function (loss function) to be optimized is the cross-entropy loss with INLINEFORM0 and INLINEFORM1 regularization, which is defined as: DISPLAYFORM0 ",
" where INLINEFORM0 is the ground truth represented as a one-hot vector, INLINEFORM1 is the predicted sentiment distribution vector given by the output layer, INLINEFORM2 is the coefficient for INLINEFORM3 regularization term, and INLINEFORM4 is the parameter set."
],
[
"We conduct experiments on three datasets: SemEval 2014 Task 4 BIBREF15 dataset composed of Restaurant reviews and Laptop reviews, and ACL 14 Twitter dataset gathered by Dong et al. dong2014adaptive. These datasets are labeled with three sentiment polarities: positive, neutral and negative. Table TABREF31 shows the number of training and test instances in each category.",
"Word embeddings in AEN-GloVe do not get updated in the learning process, but we fine-tune pre-trained BERT in AEN-BERT. Embedding dimension INLINEFORM0 is 300 for GloVe and is 768 for pre-trained BERT. Dimension of hidden states INLINEFORM1 is set to 300. The weights of our model are initialized with Glorot initialization BIBREF16 . During training, we set label smoothing parameter INLINEFORM2 to 0.2 BIBREF14 , the coefficient INLINEFORM3 of INLINEFORM4 regularization item is INLINEFORM5 and dropout rate is 0.1. Adam optimizer BIBREF17 is applied to update all the parameters. We adopt the Accuracy and Macro-F1 metrics to evaluate the performance of the model."
],
[
"In order to comprehensively evaluate and analysis the performance of AEN-GloVe, we list 7 baseline models and design 4 ablations of AEN-GloVe. We also design a basic BERT-based model to evaluate the performance of AEN-BERT.",
" ",
"Non-RNN based baselines:",
" INLINEFORM0 Feature-based SVM BIBREF18 is a traditional support vector machine based model with extensive feature engineering.",
" INLINEFORM0 Rec-NN BIBREF0 firstly uses rules to transform the dependency tree and put the opinion target at the root, and then learns the sentence representation toward target via semantic composition using Recursive NNs.",
" INLINEFORM0 MemNet BIBREF19 uses multi-hops of attention layers on the context word embeddings for sentence representation to explicitly captures the importance of each context word.",
" ",
"RNN based baselines:",
" INLINEFORM0 TD-LSTM BIBREF1 extends LSTM by using two LSTM networks to model the left context with target and the right context with target respectively. The left and right target-dependent representations are concatenated for predicting the sentiment polarity of the target.",
" INLINEFORM0 ATAE-LSTM BIBREF3 strengthens the effect of target embeddings, which appends the target embeddings with each word embeddings and use LSTM with attention to get the final representation for classification.",
" INLINEFORM0 IAN BIBREF4 learns the representations of the target and context with two LSTMs and attentions interactively, which generates the representations for targets and contexts with respect to each other.",
" INLINEFORM0 RAM BIBREF5 strengthens MemNet by representing memory with bidirectional LSTM and using a gated recurrent unit network to combine the multiple attention outputs for sentence representation.",
" ",
"AEN-GloVe ablations:",
" INLINEFORM0 AEN-GloVe w/o PCT ablates PCT module.",
" INLINEFORM0 AEN-GloVe w/o MHA ablates MHA module.",
" INLINEFORM0 AEN-GloVe w/o LSR ablates label smoothing regularization.",
" INLINEFORM0 AEN-GloVe-BiLSTM replaces the attentional encoder layer with two bidirectional LSTM.",
" ",
"Basic BERT-based model:",
" INLINEFORM0 BERT-SPC feeds sequence “[CLS] + context + [SEP] + target + [SEP]” into the basic BERT model for sentence pair classification task."
],
[
"Table TABREF34 shows the performance comparison of AEN with other models. BERT-SPC and AEN-BERT obtain substantial accuracy improvements, which shows the power of pre-trained BERT on small-data task. The overall performance of AEN-BERT is better than BERT-SPC, which suggests that it is important to design a downstream network customized to a specific task. As the prior knowledge in the pre-trained BERT is not specific to any particular domain, further fine-tuning on the specific task is necessary for releasing the true power of BERT.",
"The overall performance of TD-LSTM is not good since it only makes a rough treatment of the target words. ATAE-LSTM, IAN and RAM are attention based models, they stably exceed the TD-LSTM method on Restaurant and Laptop datasets. RAM is better than other RNN based models, but it does not perform well on Twitter dataset, which might because bidirectional LSTM is not good at modeling small and ungrammatical text.",
"Feature-based SVM is still a competitive baseline, but relying on manually-designed features. Rec-NN gets the worst performances among all neural network baselines as dependency parsing is not guaranteed to work well on ungrammatical short texts such as tweets and comments. Like AEN, MemNet also eschews recurrence, but its overall performance is not good since it does not model the hidden semantic of embeddings, and the result of the last attention is essentially a linear combination of word embeddings."
],
[
"As shown in Table TABREF34 , the performances of AEN-GloVe ablations are incomparable with AEN-GloVe in both accuracy and macro-F1 measure. This result shows that all of these discarded components are crucial for a good performance. Comparing the results of AEN-GloVe and AEN-GloVe w/o LSR, we observe that the accuracy of AEN-GloVe w/o LSR drops significantly on all three datasets. We could attribute this phenomenon to the unreliability of the training samples with neutral sentiment. The overall performance of AEN-GloVe and AEN-GloVe-BiLSTM is relatively close, AEN-GloVe performs better on the Restaurant dataset. More importantly, AEN-GloVe has fewer parameters and is easier to parallelize.",
"To figure out whether the proposed AEN-GloVe is a lightweight alternative of recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models base on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU .",
"RNN-based and BERT-based models indeed have larger model size. ATAE-LSTM, IAN, RAM, and AEN-GloVe-BiLSTM are all attention based RNN models, memory optimization for these models will be more difficult as the encoded hidden states must be kept simultaneously in memory in order to perform attention mechanisms. MemNet has the lowest model size as it only has one shared attention layer and two linear layers, it does not calculate hidden states of word embeddings. AEN-GloVe's lightweight level ranks second, since it takes some more parameters than MemNet in modeling hidden states of sequences. As a comparison, the model size of AEN-GloVe-BiLSTM is more than twice that of AEN-GloVe, but does not bring any performance improvements."
],
[
"In this work, we propose an attentional encoder network for the targeted sentiment classification task. which employs attention based encoders for the modeling between context and target. We raise the the label unreliability issue add a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and lightweight of the proposed model."
]
]
} | {
"question": [
"Do they use multi-attention heads?",
"How big is their model?",
"How is their model different from BERT?"
],
"question_id": [
"9bffc9a9c527e938b2a95ba60c483a916dbd1f6b",
"8434974090491a3c00eed4f22a878f0b70970713",
"b67420da975689e47d3ea1c12b601851018c4071"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The attentional encoder layer is a parallelizable and interactive alternative of LSTM and is applied to compute the hidden states of the input embeddings. This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT)."
],
"highlighted_evidence": [
"This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT)."
]
}
],
"annotation_id": [
"0064ff0d9e06a701f36bb4baabb7d086c3311fd6"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Proposed model has 1.16 million parameters and 11.04 MB.",
"evidence": [
"To figure out whether the proposed AEN-GloVe is a lightweight alternative of recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models base on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU .",
"FLOAT SELECTED: Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold."
],
"highlighted_evidence": [
"Statistical results are reported in Table TABREF37 .",
"FLOAT SELECTED: Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold."
]
}
],
"annotation_id": [
"dfb36457161c897a38f62432f6193613b02071e8"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT."
],
"highlighted_evidence": [
"Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT."
]
}
],
"annotation_id": [
"5cfeb55daf47a1b7845791e8c4a7ed3da8a2ccfd"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Overall architecture of the proposed AEN.",
"Table 1: Statistics of the datasets.",
"Table 2: Main results. The results of baseline models are retrieved from published papers. Top 2 scores are in bold.",
"Table 3: Model sizes. Memory footprints are evaluated on the Restaurant dataset. Lowest 2 are in bold."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png"
]
} |
1904.03339 | ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples | This paper describes our system, Joint Encoders for Stable Suggestion Inference (JESSI), for the SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums. JESSI is a combination of two sentence encoders: (a) one using multiple pre-trained word embeddings learned from log-bilinear regression (GloVe) and translation (CoVe) models, and (b) one on top of word encodings from a pre-trained deep bidirectional transformer (BERT). We include a domain adversarial training module when training for out-of-domain samples. Our experiments show that while BERT performs exceptionally well for in-domain samples, several runs of the model show that it is unstable for out-of-domain samples. The problem is mitigated tremendously by (1) combining BERT with a non-BERT encoder, and (2) using an RNN-based classifier on top of BERT. Our final models obtained second place with 77.78\% F-Score on Subtask A (i.e. in-domain) and achieved an F-Score of 79.59\% on Subtask B (i.e. out-of-domain), even without using any additional external data. | {
"section_name": [
"Introduction",
"Joint Encoders for Stable Suggestion Inference",
"Experiments",
"Conclusion",
"Acknowledgement"
],
"paragraphs": [
[
"Opinion mining BIBREF0 is a huge field that covers many NLP tasks ranging from sentiment analysis BIBREF1 , aspect extraction BIBREF2 , and opinion summarization BIBREF3 , among others. Despite the vast literature on opinion mining, the task on suggestion mining has given little attention. Suggestion mining BIBREF4 is the task of collecting and categorizing suggestions about a certain product. This is important because while opinions indirectly give hints on how to improve a product (e.g. analyzing reviews), suggestions are direct improvement requests (e.g. tips, advice, recommendations) from people who have used the product.",
"To this end, BIBREF5 organized a shared task specifically on suggestion mining called SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums. The shared task is composed of two subtasks, Subtask A and B. In Subtask A, systems are tasked to predict whether a sentence of a certain domain (i.e. electronics) entails a suggestion or not given a training data of the same domain. In Subtask B, systems are tasked to do suggestion prediction of a sentence from another domain (i.e. hotels). Organizers observed four main challenges: (a) sparse occurrences of suggestions; (b) figurative expressions; (c) different domains; and (d) complex sentences. While previous attempts BIBREF6 , BIBREF4 , BIBREF7 made use of human-engineered features to solve this problem, the goal of the shared task is to leverage the advancements seen on neural networks, by providing a larger dataset to be used on data-intensive models to achieve better performance.",
"This paper describes our system JESSI (Joint Encoders for Stable Suggestion Inference). JESSI is built as a combination of two neural-based encoders using multiple pre-trained word embeddings, including BERT BIBREF8 , a pre-trained deep bidirectional transformer that is recently reported to perform exceptionally well across several tasks. The main intuition behind JESSI comes from our finding that although BERT gives exceptional performance gains when applied to in-domain samples, it becomes unstable when applied to out-of-domain samples, even when using a domain adversarial training BIBREF9 module. This problem is mitigated using two tricks: (1) jointly training BERT with a CNN-based encoder, and (2) using an RNN-based encoder on top of BERT before feeding to the classifier.",
"JESSI is trained using only the datasets given on the shared task, without using any additional external data. Despite this, JESSI performs second on Subtask A with an F1 score of 77.78% among 33 other team submissions. It also performs well on Subtask B with an F1 score of 79.59%."
],
[
"We present our model JESSI, which stands for Joint Encoders for Stable Suggestion Inference, shown in Figure FIGREF4 . Given a sentence INLINEFORM0 , JESSI returns a binary suggestion label INLINEFORM1 . JESSI consists of four important components: (1) A BERT-based encoder that leverages general knowledge acquired from a large pre-trained language model, (2) A CNN-based encoder that learns task-specific sentence representations, (3) an MLP classifier that predicts the label given the joint encodings, and (4) a domain adversarial training module that prevents the model to distinguish between the two domains."
],
[
"In this section, we show our results and experiments. We denote JESSI-A as our model for Subtask A (i.e., BERT INLINEFORM0 CNN+CNN INLINEFORM1 Att), and JESSI-B as our model for Subtask B (i.e., BERT INLINEFORM2 BiSRU+CNN INLINEFORM3 Att+DomAdv). The performance of the models is measured and compared using the F1-score."
],
[
"We presented JESSI (Joint Encoders for Stable Suggestion Inference), our system for the SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums. JESSI builds upon jointly combined encoders, borrowing pre-trained knowledge from a language model BERT and a translation model CoVe. We found that BERT alone performs bad and unstably when tested on out-of-domain samples. We mitigate the problem by appending an RNN-based sentence encoder above BERT, and jointly combining a CNN-based encoder. Results from the shared task show that JESSI performs competitively among participating models, obtaining second place on Subtask A with an F-Score of 77.78%. It also performs well on Subtask B, with an F-Score of 79.59%, even without using any additional external data."
],
[
"This research was supported by the MSIT (Ministry of Science ICT), Korea, under (National Program for Excellence in SW) (2015-0-00910) and (Artificial Intelligence Contact Center Solution) (2018-0-00605) supervised by the IITP(Institute for Information & Communications Technology Planning & Evaluation) "
]
]
} | {
"question": [
"What datasets were used?",
"How did they do compared to other teams?"
],
"question_id": [
"01d91d356568fca79e47873bd0541bd22ba66ec0",
"37e45a3439b048a80c762418099a183b05772e6a"
],
"nlp_background": [
"",
""
],
"topic_background": [
"",
""
],
"paper_read": [
"",
""
],
"search_query": [
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"datasets given on the shared task, without using any additional external data"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"JESSI is trained using only the datasets given on the shared task, without using any additional external data. Despite this, JESSI performs second on Subtask A with an F1 score of 77.78% among 33 other team submissions. It also performs well on Subtask B with an F1 score of 79.59%."
],
"highlighted_evidence": [
"JESSI is trained using only the datasets given on the shared task, without using any additional external data."
]
}
],
"annotation_id": [
"0069f8cf0fca14e46df2563259a1a828fa24b1ce"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"second on Subtask A with an F1 score of 77.78% among 33 other team submissions",
"performs well on Subtask B with an F1 score of 79.59%"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"JESSI is trained using only the datasets given on the shared task, without using any additional external data. Despite this, JESSI performs second on Subtask A with an F1 score of 77.78% among 33 other team submissions. It also performs well on Subtask B with an F1 score of 79.59%."
],
"highlighted_evidence": [
"Despite this, JESSI performs second on Subtask A with an F1 score of 77.78% among 33 other team submissions. It also performs well on Subtask B with an F1 score of 79.59%."
]
}
],
"annotation_id": [
"bdbc45e64dbda411c404ef772fb32c4aad1aafb5"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: The overall architecture of JESSI for Subtask B. The thinner arrows correspond to the forward propagations, while the thicker arrows correspond to the backward propagations, where gradient calculations are indicated. For Subtask A, a CNN encoder is used instead of the BiSRU encoder, and the domain adversarial training module is not used.",
"Table 1: Dataset Statistics",
"Table 2: Ablation results for both subtasks using the provided trial sets. The + denotes a replacement of the BERT-based encoder, while the – denotes a removal of a specific component.",
"Table 3: Summary statistics of the F-Scores of 10 runs of different models on the trial set of Subtask B when doing a 10-fold validation over the available training data. All models include the domain adversarial training module (+DOMADV), which is omitted for brevity.",
"Table 4: F-Scores of JESSI and top three models for each subtask. Due to time constraints, we were not able to submit JESSI-B during the competition. For clarity, we also show our final official submission (CNN→ATT+DOMADV).",
"Figure 2: Accuracy over various input sentence length on the test set."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"5-Figure2-1.png"
]
} |
1910.11769 | DENS: A Dataset for Multi-class Emotion Analysis | We introduce a new dataset for multi-class emotion analysis from long-form narratives in English. The Dataset for Emotions of Narrative Sequences (DENS) was collected from both classic literature available on Project Gutenberg and modern online narratives available on Wattpad, annotated using Amazon Mechanical Turk. A number of statistics and baseline benchmarks are provided for the dataset. Of the tested techniques, we find that the fine-tuning of a pre-trained BERT model achieves the best results, with an average micro-F1 score of 60.4%. Our results show that the dataset provides a novel opportunity in emotion analysis that requires moving beyond existing sentence-level techniques. | {
"section_name": [
"Introduction",
"Background",
"Dataset",
"Dataset ::: Plutchik’s Wheel of Emotions",
"Dataset ::: Passage Selection",
"Dataset ::: Mechanical Turk (MTurk)",
"Dataset ::: Dataset Statistics",
"Benchmarks",
"Benchmarks ::: Bag-of-Words-based Benchmarks",
"Benchmarks ::: Doc2Vec + SVM",
"Benchmarks ::: Hierarchical RNN",
"Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)",
"Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)",
"Benchmarks ::: Fine-tuned BERT",
"Conclusion",
"Appendices ::: Sample Data"
],
"paragraphs": [
[
"Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms.",
"Many recent advances in natural language processing on emotions have focused on product reviews BIBREF0 and tweets BIBREF1, BIBREF2. These datasets are often limited in length (e.g. by the number of words in tweets), purpose (e.g. product reviews), or emotional spectrum (e.g. binary classification).",
"Character dialogues and narratives in storytelling usually carry strong emotions. A memorable story is often one in which the emotional journey of the characters resonates with the reader. Indeed, emotion is one of the most important aspects of narratives. In order to characterize narrative emotions properly, we must move beyond binary constraints (e.g. good or bad, happy or sad).",
"In this paper, we introduce the Dataset for Emotions of Narrative Sequences (DENS) for emotion analysis, consisting of passages from long-form fictional narratives from both classic literature and modern stories in English. The data samples consist of self-contained passages that span several sentences and a variety of subjects. Each sample is annotated by using one of 9 classes and an indicator for annotator agreement."
],
[
"Using the categorical basic emotion model BIBREF3, BIBREF4, BIBREF5 studied creating lexicons from tweets for use in emotion analysis. Recently, BIBREF1, BIBREF6 and BIBREF2 proposed shared-tasks for multi-class emotion analysis based on tweets.",
"Fewer works have been reported on understanding emotions in narratives. Emotional Arc BIBREF7 is one recent advance in this direction. The work used lexicons and unsupervised learning methods based on unlabelled passages from titles in Project Gutenberg.",
"For labelled datasets on narratives, BIBREF8 provided a sentence-level annotated corpus of childrens' stories and BIBREF9 provided phrase-level annotations on selected Project Gutenberg titles.",
"To the best of our knowledge, the dataset in this work is the first to provide multi-class emotion labels on passages, selected from both Project Gutenberg and modern narratives. The dataset is available upon request for non-commercial, research only purposes."
],
[
"In this section, we describe the process used to collect and annotate the dataset."
],
[
"The dataset is annotated based on a modified Plutchik’s wheel of emotions.",
"The original Plutchik’s wheel consists of 8 primary emotions: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Trust, Disgust. In addition, more complex emotions can be formed by combing two basic emotions. For example, Love is defined as a combination of Joy and Trust (Fig. 1).",
"The intensity of an emotion is also captured in Plutchik's wheel. For example, the primary emotion of Anger can vary between Annoyance (mild) and Rage (intense).",
"We conducted an initial survey based on 100 stories with a significant fraction sampled from the romance genre. We asked readers to identify the major emotion exhibited in each story from a choice of the original 8 primary emotions.",
"We found that readers have significant difficulty in identifying Trust as an emotion associated with romantic stories. Hence, we modified our annotation scheme by removing Trust and adding Love. We also added the Neutral category to denote passages that do not exhibit any emotional content.",
"The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral."
],
[
"We selected both classic and modern narratives in English for this dataset. The modern narratives were sampled based on popularity from Wattpad. We parsed selected narratives into passages, where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens.",
"In long-form narratives, many non-conversational passages are intended for transition or scene introduction, and may not carry any emotion. We divided the eligible passages into two parts, and one part was pruned using selected emotion-rich but ambiguous lexicons such as cry, punch, kiss, etc.. Then we mixed this pruned part with the unpruned part for annotation in order to reduce the number of neutral passages. See Appendix SECREF25 for the lexicons used."
],
[
"MTurk was set up using the standard sentiment template and instructed the crowd annotators to `pick the best/major emotion embodied in the passage'.",
"We further provided instructions to clarify the intensity of an emotion, such as: “Rage/Annoyance is a form of Anger”, “Serenity/Ecstasy is a form of Joy”, and “Love includes Romantic/Family/Friendship”, along with sample passages.",
"We required all annotators have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\\kappa $ score of greater than $0.4$.",
"For passages without majority agreement between annotators, we consolidated their labels using in-house data annotators who are experts in narrative content. A passage is accepted as valid if the in-house annotator's label matched any one of the MTurk annotators' labels. The remaining passages are discarded. We provide the fraction of annotator agreement for each label in the dataset.",
"Though passages may lose some emotional context when read independently of the complete narrative, we believe annotator agreement on our dataset supports the assertion that small excerpts can still convey coherent emotions.",
"During the annotation process, several annotators had suggested for us to include additional emotions such as confused, pain, and jealousy, which are common to narratives. As they were not part of the original Plutchik’s wheel, we decided to not include them. An interesting future direction is to study the relationship between emotions such as ‘pain versus sadness’ or ‘confused versus surprise’ and improve the emotion model for narratives."
],
[
"The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.",
"The vocabulary size is 28K (when lowercased). It contains over 1600 unique titles across multiple categories, including 88 titles (1520 passages) from Project Gutenberg. All of the modern narratives were written after the year 2000, with notable amount of themes in coming-of-age, strong-female-lead, and LGBTQ+. The genre distribution is listed in Table TABREF8.",
"In the final dataset, 21.0% of the data has consensus between all annotators, 73.5% has majority agreement, and 5.48% has labels assigned after consultation with in-house annotators.",
"The distribution of data points over labels with top lexicons (lower-cased, normalized) is shown in Table TABREF9. Note that the Disgust category is very small and should be discarded. Furthermore, we suspect that the data labelled as Surprise may be noisier than other categories and should be discarded as well.",
"Table TABREF10 shows a few examples labelled data from classic titles. More examples can be found in Table TABREF26 in the Appendix SECREF27."
],
[
"We performed benchmark experiments on the dataset using several different algorithms. In all experiments, we have discarded the data labelled with Surprise and Disgust.",
"We pre-processed the data by using the SpaCy pipeline. We masked out named entities with entity-type specific placeholders to reduce the chance of benchmark models utilizing named entities as a basis for classification.",
"Benchmark results are shown in Table TABREF17. The dataset is approximately balanced after discarding the Surprise and Disgust classes. We report the average micro-F1 scores, with 5-fold cross validation for each technique.",
"We provide a brief overview of each benchmark experiment below. Among all of the benchmarks, Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 achieved the best performance with a 0.604 micro-F1 score.",
"Overall, we observed that deep-learning based techniques performed better than lexical based methods. This suggests that a method which attends to context and themes could do well on the dataset."
],
[
"We computed bag-of-words-based benchmarks using the following methods:",
"Classification with TF-IDF + Linear SVM (TF-IDF + SVM)",
"Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)",
"Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)",
"Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)"
],
[
"We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier."
],
[
"For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.",
"The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training."
],
[
"One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.",
"Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.",
"The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.",
"Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments."
],
[
"Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.",
"We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function."
],
[
"Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.",
"We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\\textrm {{\\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%."
],
[
"We introduce DENS, a dataset for multi-class emotion analysis from long-form narratives in English. We provide a number of benchmark results based on models ranging from bag-of-word models to methods based on pre-trained language models (ELMo and BERT).",
"Our benchmark results demonstrate that this dataset provides a novel challenge in emotion analysis. The results also demonstrate that attention-based models could significantly improve performance on classification tasks such as emotion analysis.",
"Interesting future directions for this work include: 1. incorporating common-sense knowledge into emotion analysis to capture semantic context and 2. using few-shot learning to bootstrap and improve performance of underrepresented emotions.",
"Finally, as narrative passages often involve interactions between multiple emotions, one avenue for future datasets could be to focus on the multi-emotion complexities of human language and their contextual interactions."
],
[
"Table TABREF26 shows sample passages from classic titles with corresponding labels."
]
]
} | {
"question": [
"Which tested technique was the worst performer?",
"How many emotions do they look at?",
"What are the baseline benchmarks?",
"What is the size of this dataset?",
"How many annotators were there?"
],
"question_id": [
"a4e66e842be1438e5cd8d7cb2a2c589f494aee27",
"cb78e280e3340b786e81636431834b75824568c3",
"2941874356e98eb2832ba22eae9cb08ec8ce0308",
"4e50e9965059899d15d3c3a0c0a2d73e0c5802a0",
"67d8e50ddcc870db71c94ad0ad7f8a59a6c67ca6"
],
"nlp_background": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"search_query": [
"dataset",
"dataset",
"dataset",
"dataset",
"dataset"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Depeche + SVM"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Table 4: Benchmark results (averaged 5-fold cross validation)",
"We computed bag-of-words-based benchmarks using the following methods:",
"Classification with TF-IDF + Linear SVM (TF-IDF + SVM)",
"Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)",
"Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)",
"Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)"
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Benchmark results (averaged 5-fold cross validation)",
"We computed bag-of-words-based benchmarks using the following methods:\n\nClassification with TF-IDF + Linear SVM (TF-IDF + SVM)\n\nClassification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)\n\nClassification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)\n\nCombination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)"
]
}
],
"annotation_id": [
"42eb0c70a3fc181f2418a7a3d55c836817cc4d8b"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "9",
"evidence": [
"The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral."
],
"highlighted_evidence": [
"The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral"
]
}
],
"annotation_id": [
"008f3d1972460817cb88951faf690c344574e4af"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"TF-IDF + SVM",
"Depeche + SVM",
"NRC + SVM",
"TF-NRC + SVM",
"Doc2Vec + SVM",
" Hierarchical RNN",
"BiRNN + Self-Attention",
"ELMo + BiRNN",
" Fine-tuned BERT"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We computed bag-of-words-based benchmarks using the following methods:",
"Classification with TF-IDF + Linear SVM (TF-IDF + SVM)",
"Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)",
"Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)",
"Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)",
"Benchmarks ::: Doc2Vec + SVM",
"We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.",
"Benchmarks ::: Hierarchical RNN",
"For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.",
"The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.",
"Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)",
"One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.",
"Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.",
"The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.",
"Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.",
"Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)",
"Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.",
"We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.",
"Benchmarks ::: Fine-tuned BERT",
"Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.",
"We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\\textrm {{\\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%."
],
"highlighted_evidence": [
"We computed bag-of-words-based benchmarks using the following methods:\n\nClassification with TF-IDF + Linear SVM (TF-IDF + SVM)\n\nClassification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)\n\nClassification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)\n\nCombination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)\n\nBenchmarks ::: Doc2Vec + SVM\nWe also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.\n\nBenchmarks ::: Hierarchical RNN\nFor this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.\n\nThe outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.\n\nBenchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)\nOne challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.\n\nSelf-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.\n\nThe benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.\n\nNote that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.\n\nBenchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)\nDeep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.\n\nWe used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.\n\nBenchmarks ::: Fine-tuned BERT\nBidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.\n\nWe used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\\textrm {{\\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%."
]
}
],
"annotation_id": [
"ea3a6a6941f3f9c06074abbb4da37590578ff09c"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words."
],
"highlighted_evidence": [
"The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words."
]
}
],
"annotation_id": [
"8789ec900d3da8e32409fff8df9c4bba5f18520e"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"3 "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We required all annotators have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\\kappa $ score of greater than $0.4$."
],
"highlighted_evidence": [
" Each passage was labelled by 3 unique annotators."
]
}
],
"annotation_id": [
"1a8a6f5247e266cb460d5555b64674b590003ec2"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Figure 1: Plutchik’s wheel of emotions (Wikimedia, 2011)",
"Table 1: Genre distribution of the modern narratives",
"Table 4: Benchmark results (averaged 5-fold cross validation)",
"Table 2: Dataset label distribution"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table4-1.png",
"4-Table2-1.png"
]
} |
1702.06378 | Multitask Learning with CTC and Segmental CRF for Speech Recognition | Segmental conditional random fields (SCRFs) and connectionist temporal classification (CTC) are two sequence labeling methods used for end-to-end training of speech recognition models. Both models define a transcription probability by marginalizing decisions about latent segmentation alternatives to derive a sequence probability: the former uses a globally normalized joint model of segment labels and durations, and the latter classifies each frame as either an output symbol or a"continuation"of the previous label. In this paper, we train a recognition model by optimizing an interpolation between the SCRF and CTC losses, where the same recurrent neural network (RNN) encoder is used for feature extraction for both outputs. We find that this multitask objective improves recognition accuracy when decoding with either the SCRF or CTC models. Additionally, we show that CTC can also be used to pretrain the RNN encoder, which improves the convergence rate when learning the joint model. | {
"section_name": [
"Introduction",
"Segmental Conditional Random Fields",
"Feature Function and Acoustic Embedding",
"Loss Function",
"Connectionist Temporal Classification ",
"Joint Training Loss",
"Experiments",
"Baseline Results",
"Multitask Learning Results",
"Conclusion",
"Acknowledgements"
],
"paragraphs": [
[
"State-of-the-art speech recognition accuracy has significantly improved over the past few years since the application of deep neural networks BIBREF0 , BIBREF1 . Recently, it has been shown that with the application of both neural network acoustic model and language model, an automatic speech recognizer can approach human-level accuracy on the Switchboard conversational speech recognition benchmark using around 2,000 hours of transcribed data BIBREF2 . While progress is mainly driven by well engineered neural network architectures and a large amount of training data, the hidden Markov model (HMM) that has been the backbone for speech recognition for decades is still playing a central role. Though tremendously successful for the problem of speech recognition, the HMM-based pipeline factorizes the whole system into several components, and building these components separately may be less computationally efficient when developing a large-scale system from thousands to hundred of thousands of examples BIBREF3 .",
"Recently, along with hybrid HMM/NN frameworks for speech recognition, there has been increasing interest in end-to-end training approaches. The key idea is to directly map the input acoustic frames to output characters or words without the intermediate alignment to context-dependent phones used by HMMs. In particular, three architectures have been proposed for the goal of end-to-end learning: connectionist temporal classification (CTC) BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , sequence-to-sequence with attention model BIBREF8 , BIBREF9 , BIBREF10 , and neural network segmental conditional random field (SCRF) BIBREF11 , BIBREF12 . These end-to-end models simplify the pipeline of speech recognition significantly. They do not require intermediate alignment or segmentation like HMMs, instead, the alignment or segmentation is marginalized out during training for CTC and SCRF or inferred by the attention mechanism. In terms of the recognition accuracy, however, the end-to-end models usually lag behind their HMM-based counterparts. Though CTC has been shown to outperform HMM systems BIBREF13 , the improvement is based on the use of context-dependent phone targets and a very large amount of training data. Therefore, it has almost the same system complexity as HMM acoustic models. When the training data is less abundant, it has been shown that the accuracy of CTC systems degrades significantly BIBREF14 .",
"However, end-to-end models have the flexibility to be combined to mitigate their individual weaknesses. For instance, multitask learning with attention models has been investigated for machine translation BIBREF15 , and Mandarin speech recognition using joint Character-Pinyin training BIBREF16 . In BIBREF17 , Kim et al. proposed a multitask learning approach to train a joint attention model and a CTC model using a shared encoder. They showed that the CTC auxiliary task can help the attention model to overcome the misalignment problem in the initial few epochs, and speed up the convergence of the attention model. Another nice property of the multitask learning approach is that the joint model can still be trained end-to-end. Inspired by this work, we study end-to-end training of a joint CTC and SCRF model using an interpolated loss function. The key difference of our study from BIBREF17 is that the two loss functions of the CTC and attention models are locally normalized for each output token, and they are both trained using the cross entropy criterion. However, the SCRF loss function is normalized at the sequence-level, which is similar to the sequence discriminative training objective function for HMMs. From this perspective, the interpolation of CTC and SCRF loss functions is analogous to the sequence discriminative training of HMMs with CE regularization to overcome overfitting, where a sequence-level loss is also interpolated with a frame-level loss, e.g., BIBREF18 . Similar to the observations in BIBREF17 , we demonstrate that the joint training approach improves the recognition accuracies of both CTC and SCRF acoustic models. Further, we also show that CTC can be used to pretrain the neural network feature extractor to speed up the convergence of the joint model. Experiments were performed on the TIMIT database."
],
[
"SCRF is a variant of the linear-chain CRF model where each output token corresponds to a segment of input tokens instead of a single input instance. In the context of speech recognition, given a sequence of input vectors of $T$ frames ${X} = ( {x}_1, \\cdots , {x}_T )$ and its corresponding sequence of output labels ${y} = ( y_1, \\cdots , y_J)$ , the zero-order linear-chain CRF defines the sequence-level conditional probability as P(y X) = 1Z(X) t=1T f ( yt, xt ), where $Z({X})$ denotes the normalization term, and $T=J$ . Extension to higher order models is straightforward, but it is usually computationally much more expensive. The model defined in Eq. ( \"Segmental Conditional Random Fields\" ) requires the length of ${X}$ and ${y}$ to be equal, which makes it inappropriate for speech recognition because the lengths of the input and output sequences are not equal. For the case where $T\\ge J$ as in speech recognition, SCRF defines the sequence-level conditional probability with the auxiliary segment labels ${E} = ({e}_1, \\cdots , {e}_J) $ as P(y, E X) = 1Z(X) j=1J f ( yj, ej, xj ), where $\\mathbf {e}_j = \\langle s_{j}, n_{j} \\rangle $ is a tuple of the beginning ( ${X} = ( {x}_1, \\cdots , {x}_T )$0 ) and the end ( ${X} = ( {x}_1, \\cdots , {x}_T )$1 ) time tag for the segment of ${X} = ( {x}_1, \\cdots , {x}_T )$2 , and ${X} = ( {x}_1, \\cdots , {x}_T )$3 while ${X} = ( {x}_1, \\cdots , {x}_T )$4 ; ${X} = ( {x}_1, \\cdots , {x}_T )$5 and ${X} = ( {x}_1, \\cdots , {x}_T )$6 denotes the vocabulary set; ${X} = ( {x}_1, \\cdots , {x}_T )$7 is the embedding vector of the segment corresponding to the token ${X} = ( {x}_1, \\cdots , {x}_T )$8 . In this case, ${X} = ( {x}_1, \\cdots , {x}_T )$9 sums over all the possible ${y} = ( y_1, \\cdots , y_J)$0 pairs, i.e., ",
"$$Z({X}) = \\sum _{y,E} \\prod _{j=1}^J \\exp f \\left( y_j, {e}_j, \\bar{x}_j \\right).$$ (Eq. 1) ",
"Similar to other CRFs, the function $f(\\cdot )$ is defined as ",
"$$f \\left( y_j, {e}_j, \\bar{x}_t \\right) = \\mathbf {w}^\\top \\Phi (y_j, {e}_j, \\bar{x}_j),$$ (Eq. 2) ",
"where $\\Phi (\\cdot )$ denotes the feature function, and $\\mathbf {w}$ is the weight vector. Most of conventional approaches for SCRF-based acoustic models use a manually defined feature function $\\Phi (\\cdot )$ , where the features and segment boundary information are provided by an auxiliary system BIBREF19 , BIBREF20 . In BIBREF21 , BIBREF12 , we proposed an end-to-end training approach for SCRFs, where $\\Phi (\\cdot )$ was defined with neural networks, and the segmental level features were learned by RNNs. The model was referred to as the segmental RNN (SRNN), and it will be used as the implementation of the SCRF acoustic model for multitask learning in this study."
],
[
"SRNN uses an RNN to learn segmental level acoustic embeddings. Given the input sequence ${X} = ({x}_1, \\cdots , {x}_T)$ , and we need to compute the embedding vector $\\bar{x}_j$ in Eq. ( 2 ) corresponding to the segment ${e}_j = \\langle s_j, n_j\\rangle $ . Since the segment boundaries are known, it is straightforward to employ an RNN to map the segment into a vector as [ l hsj",
"hsj+1",
" $\\vdots $ ",
"hnj ] = [ l RNN(h0, xsj)",
"RNN(hsj, xsj+1)",
" $\\vdots $ ",
"RNN(hnj-1, xnj) ] where ${h}_0$ denotes the initial hidden state, which is initialized to be zero. RNN( $\\cdot $ ) denotes the nonlinear recurrence operation used in an RNN, which takes the previous hidden state and the feature vector at the current timestep as inputs, and produce an updated hidden state vector. Given the recurrent hidden states, the embedding vector can be simply defined as $\\bar{x}_j= {h}_{n_j}$ as in our previous work BIBREF12 . However, the drawback of this implementation is the large memory cost, as we need to store the array of hidden states $({h}_{s_j}, \\cdots , {h}_{n_j})$ for all the possible segments $\\langle s_j, n_j\\rangle $ . If we denote $H$ as the dimension of an RNN hidden state, the memory cost will be on the order of $O(T^2H)$ , where $T$ is the length of $X$ . It is especially problematic for the joint model as the CTC model requires additional memory space. In this work, we adopt another approach that requires much less memory. In this approach, we use an RNN to read the whole input sequence as [ c h1",
"h2",
" $\\vdots $ ",
"hT ] = [ l RNN(h0, x1)",
"RNN(h1, x2)",
" $\\vdots $ ",
"RNN(hT-1, xT) ] and we define the embedding vector for segment ${e} = \\langle k, t\\rangle $ as xj = [ c hsj",
"hnj ] In this case, we only provide the context information for the feature function $\\Phi (\\cdot )$ to extract segmental features. We refer this approach as context-aware embedding. Since we only need to read the input sequence once, the memory requirement is on the order of $O(TH)$ , which is much smaller. The cost, however, is the slightly degradation of the recognition accuracy. This model is illustrated by Figure 1 .",
"The feature function $\\Phi (\\cdot )$ also requires a vector representation of the label $y_j$ . This embedding vector can be obtained using a linear embedding matrix, following common practice for RNN language models. More specifically, $y_j$ is first represented as a one-hot vector ${v}_j$ , and it is then mapped into a continuous space by a linear embedding matrix ${M}$ as ",
"$${u}_j = {M v}_j$$ (Eq. 4) ",
"Given the acoustic embedding $\\bar{x}_j$ and label embedding $u_j$ , the feature function $\\Phi (\\cdot )$ can be represented as (yj, ej, xj) = (W1uj + W2xj + b), where $\\sigma $ denotes a non-linear activation function (e.g., sigmoid or tanh); $W_1, W_2$ and $b$ are weight matrices and a bias vector. Eq. ( \"Connectionist Temporal Classification \" ) corresponds to one layer of non-linear transformation. In fact, it is straightforward to stack multiple nonlinear layers in this feature function."
],
[
"For speech recognition, the segmentation labels ${E}$ are usually unknown in the training set. In this case, we cannot train the model directly by maximizing the conditional probability in Eq. ( \"Segmental Conditional Random Fields\" ). However, the problem can be addressed by marginalizing out the segmentation variable as Lscrf = - P(y X)",
"= - E P(y, E X)",
"= - E j f ( yj, ej, xj ) Z(X, y) + Z(X), where $Z({X}, {y})$ denotes the summation over all the possible segmentations when only ${y}$ is observed. To simplify notation, the objective function $\\mathcal {L}_{\\mathit {scrf}}$ is defined here with only one training utterance.",
"However, the number of possible segmentations is exponential in the length of ${X}$ , which makes the naïve computation of both $Z({X}, {y})$ and $Z({X})$ impractical. To address this problem, a dynamic programming algorithm can be applied, which can reduce the computational complexity to $O(T^2\\cdot |\\mathcal {Y}|)$ BIBREF22 . The computational cost can be further reduced by limiting the maximum length of all the possible segments. The reader is referred to BIBREF12 for further details including the decoding algorithm."
],
[
"CTC also directly computes the conditional probability $P(y \\mid X)$ , with the key difference from SCRF in that it normalizes the probabilistic distribution at the frame level. To address the problem of length mismatch between the input and output sequences, CTC allows repetitions of output labels and introduces a special blank token ( $-$ ), which represents the probability of not emitting any label at a particular time step. The conditional probability is then obtained by summing over all the probabilities of all the paths that corresponding to $y$ after merging the repeated labels and removing the blank tokens, i.e., P(y X) = (y) P(X), where $\\Psi (y)$ denotes the set of all possible paths that correspond to $y$ after repetitions of labels and insertions of the blank token. Now the length of $\\pi $ is the same as $X$ , the probability $P(\\pi \\mid X)$ is then approximated by the independence assumption as P(X) t=1T P(t xt), where $\\pi _t $ ranges over $\\mathcal {Y}\\cup \\lbrace -\\rbrace $ , and $-$0 can be computed using the softmax function. The training criterion for CTC is to maximize the conditional probability of the ground truth labels, which is equivalent to minimizing the negative log likelihood: Lctc = -P(y X), which can be reformulated as the CE criterion. More details regarding the computation of the loss and the backpropagation algorithm to train CTC models can be found in BIBREF23 ."
],
[
"Training the two models jointly is trivial. We can simply interpolate the CTC and SCRF loss functions as L = Lctc + (1-)Lscrf, where $\\lambda \\in [0, 1]$ is the interpolation weight. The two models share the same neural network for feature extraction. In this work, we focus on the RNN with long short-term memory (LSTM) BIBREF24 units for feature extraction. Other types of neural architecture, e.g., convolutional neural network (CNN) or combinations of CNN and RNN, may be considered in future work."
],
[
"Our experiments were performed on the TIMIT database, and both the SRNN and CTC models were implemented using the DyNet toolkit BIBREF25 . We followed the standard protocol of the TIMIT dataset, and our experiments were based on the Kaldi recipe BIBREF26 . We used the core test set as our evaluation set, which has 192 utterances. Our models were trained with 48 phonemes, and their predictions were converted to 39 phonemes before scoring. The dimension of $\\mathbf {u}_j$ was fixed to be 64, and the dimension of $\\mathbf {w}$ in Eq. ( 2 ) is also 64. We set the initial SGD learning rate to be 0.1, and we exponentially decay the learning rate by 0.75 when the validation error stopped decreasing. We also subsampled the acoustic sequence by a factor of 4 using the hierarchical RNN as in BIBREF12 . Our models were trained with dropout regularization BIBREF27 , using a specific implementation for recurrent networks BIBREF28 . The dropout rate was 0.2 unless specified otherwise. Our models were randomly initialized with the same random seed."
],
[
"Table 1 shows the baseline results of SRNN and CTC models using two different kinds of features. The FBANK features are 120-dimensional with delta and delta-delta coefficients, and the fMLLR features are 40-dimensional, which were obtained from a Kaldi baseline system. We used a 3-layer bidirectional LSTMs for feature extraction, and we used the greedy best path decoding algorithm for both models. Our SRNN and CTC achieved comparable phone error rate (PER) for both kinds of features. However, for the CTC system, Graves et al. BIBREF29 obtained a better result, using about the same size of neural network (3 hidden layers with 250 hidden units of bidirectional LSTMs), compared to ours (18.6% vs. 19.9%). Apart from the implementation difference of using different code bases, Graves et al. BIBREF29 applied the prefix decoding with beam search, which may have lower search error than our best path decoding algorithm."
],
[
"Table 2 shows results of multitask learning for CTC and SRNN using the interpolated loss in Eq. ( \"Joint Training Loss\" ). We only show results of using LSTMs with 250 dimensional hidden states. The interpolation weight was set to be 0.5. In our experiments, tuning the interpolation weight did not further improve the recognition accuracy. From Table 2 , we can see that multitask learning improves recognition accuracies of both SRNN and CTC acoustic models, which may due to the regularization effect of the joint training loss. The improvement for FBANK features is much larger than fMLLR features. In particular, with multitask learning, the recognition accuracy of our CTC system with best path decoding is comparable to the results obtained by Graves et al. BIBREF29 with beam search decoding.",
"One of the major drawbacks of SCRF models is their high computational cost. In our experiments, the CTC model is around 3–4 times faster than the SRNN model that uses the same RNN encoder. The joint model by multitask learning is slightly more expensive than the stand-alone SRNN model. To cut down the computational cost, we investigated if CTC can be used to pretrain the RNN encoder to speed up the training of the joint model. This is analogous to sequence training of HMM acoustic models, where the network is usually pretrained by the frame-level CE criterion. Figure 2 shows the convergence curves of the joint model with and without CTC pretraining, and we see pretraining indeed improves the convergence speed of the joint model."
],
[
"We investigated multitask learning with CTC and SCRF for speech recognition in this paper. Using an RNN encoder for feature extraction, both CTC and SCRF can be trained end-to-end, and the two models can be trained together by interpolating the two loss functions. From experiments on the TIMIT dataset, the multitask learning approach improved the recognition accuracies of both CTC and SCRF acoustic models. We also showed that CTC can be used to pretrain the RNN encoder, speeding up the training of the joint model. In the future, we will study the multitask learning approach for larger-scale speech recognition tasks, where the CTC pretraining approach may be more helpful to overcome the problem of high computational cost."
],
[
"We thank the NVIDIA Corporation for the donation of a Titan X GPU."
]
]
} | {
"question": [
"Can SCRF be used to pretrain the model?"
],
"question_id": [
"aecb485ea7d501094e50ad022ade4f0c93088d80"
],
"nlp_background": [
""
],
"topic_background": [
"familiar"
],
"paper_read": [
"no"
],
"search_query": [
"pretrain"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"One of the major drawbacks of SCRF models is their high computational cost. In our experiments, the CTC model is around 3–4 times faster than the SRNN model that uses the same RNN encoder. The joint model by multitask learning is slightly more expensive than the stand-alone SRNN model. To cut down the computational cost, we investigated if CTC can be used to pretrain the RNN encoder to speed up the training of the joint model. This is analogous to sequence training of HMM acoustic models, where the network is usually pretrained by the frame-level CE criterion. Figure 2 shows the convergence curves of the joint model with and without CTC pretraining, and we see pretraining indeed improves the convergence speed of the joint model."
],
"highlighted_evidence": [
"One of the major drawbacks of SCRF models is their high computational cost. In our experiments, the CTC model is around 3–4 times faster than the SRNN model that uses the same RNN encoder.",
"To cut down the computational cost, we investigated if CTC can be used to pretrain the RNN encoder to speed up the training of the joint model.",
"Figure 2 shows the convergence curves of the joint model with and without CTC pretraining, and we see pretraining indeed improves the convergence speed of the joint model."
]
}
],
"annotation_id": [
"009ab38492f25d372a9da73d1ab1ed781f49082c"
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
]
} | {
"caption": [
"Figure 1: A Segmental RNN with the context aware embedding. The acoustic segmental embedding vector is composed by the hidden states from the RNN encoder corresponding to the beginning and end time tags.",
"Table 1: Phone error rates of baseline CTC and SRNN models.",
"Figure 2: Convergence curves with and without CTC pretraining in multi-task learning framework.",
"Table 2: Results of three types of acoustic features."
],
"file": [
"3-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"4-Table2-1.png"
]
} |
1903.03467 | Filling Gender&Number Gaps in Neural Machine Translation with Black-box Context Injection | When translating from a language that does not morphologically mark information such as gender and number into a language that does, translation systems must"guess"this missing information, often leading to incorrect translations in the given context. We propose a black-box approach for injecting the missing information to a pre-trained neural machine translation system, allowing to control the morphological variations in the generated translations without changing the underlying model or training data. We evaluate our method on an English to Hebrew translation task, and show that it is effective in injecting the gender and number information and that supplying the correct information improves the translation accuracy in up to 2.3 BLEU on a female-speaker test set for a state-of-the-art online black-box system. Finally, we perform a fine-grained syntactic analysis of the generated translations that shows the effectiveness of our method. | {
"section_name": [
"Introduction",
"Morphological Ambiguity in Translation",
"Black-Box Knowledge Injection",
"Experiments & Results",
"Quantitative Results",
"Qualitative Results",
"Comparison to vanmassenhove-hardmeier-way:2018:EMNLP",
"Other Languages",
"Related Work",
"Conclusions"
],
"paragraphs": [
[
"A common way for marking information about gender, number, and case in language is morphology, or the structure of a given word in the language. However, different languages mark such information in different ways – for example, in some languages gender may be marked on the head word of a syntactic dependency relation, while in other languages it is marked on the dependent, on both, or on none of them BIBREF0 . This morphological diversity creates a challenge for machine translation, as there are ambiguous cases where more than one correct translation exists for the same source sentence. For example, while the English sentence “I love language” is ambiguous with respect to the gender of the speaker, Hebrew marks verbs for the gender of their subject and does not allow gender-neutral translation. This allows two possible Hebrew translations – one in a masculine and the other in a feminine form. As a consequence, a sentence-level translator (either human or machine) must commit to the gender of the speaker, adding information that is not present in the source. Without additional context, this choice must be done arbitrarily by relying on language conventions, world knowledge or statistical (stereotypical) knowledge.",
"Indeed, the English sentence “I work as a doctor” is translated into Hebrew by Google Translate using the masculine verb form oved, indicating a male speaker, while “I work as a nurse” is translated with the feminine form ovedet, indicating a female speaker (verified on March 2019). While this is still an issue, there have been recent efforts to reduce it for specific language pairs.",
"We present a simple black-box method to influence the interpretation chosen by an NMT system in these ambiguous cases. More concretely, we construct pre-defined textual hints about the gender and number of the speaker and the audience (the interlocutors), which we concatenate to a given input sentence that we would like to translate accordingly. We then show that a black-box NMT system makes the desired morphological decisions according to the given hint, even when no other evidence is available on the source side. While adding those hints results in additional text on the target side, we show that it is simple to remove, leaving only the desired translation.",
"Our method is appealing as it only requires simple pre-and-post processing of the inputs and outputs, without considering the system internals, or requiring specific annotated data and training procedure as in previous work BIBREF1 . We show that in spite of its simplicity, it is effective in resolving many of the ambiguities and improves the translation quality in up to 2.3 BLEU when given the correct hints, which may be inferred from text metadata or other sources. Finally, we perform a fine-grained syntactic analysis of the translations generated using our method which shows its effectiveness."
],
[
"Different languages use different morphological features marking different properties on different elements. For example, English marks for number, case, aspect, tense, person, and degree of comparison. However, English does not mark gender on nouns and verbs. Even when a certain property is marked, languages differ in the form and location of the marking BIBREF0 . For example, marking can occur on the head of a syntactic dependency construction, on its argument, on both (requiring agreement), or on none of them. Translation systems must generate correct target-language morphology as part of the translation process. This requires knowledge of both the source-side and target-side morphology. Current state-of-the-art translation systems do capture many aspects of natural language, including morphology, when a relevant context is available BIBREF2 , BIBREF3 , but resort to “guessing” based on the training-data statistics when it is not. Complications arise when different languages convey different kinds of information in their morphological systems. In such cases, a translation system may be required to remove information available in the source sentence, or to add information not available in it, where the latter can be especially tricky."
],
[
"Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.",
"We are motivated by recent work by BIBREF4 who showed that NMT systems learn to track coreference chains when presented with sufficient discourse context. We conjecture that there are enough sentence-internal pronominal coreference chains appearing in the training data of large-scale NMT systems, such that state-of-the-art NMT systems can and do track sentence-internal coreference. We devise a wrapper method to make use of this coreference tracking ability by introducing artificial antecedents that unambiguously convey the desired gender and number properties of the speaker and audience.",
"More concretely, a sentence such as “I love you” is ambiguous with respect to the gender of the speaker and the gender and number of the audience. However, sentences such as “I love you, she told him” are unambiguous given the coreference groups {I, she} and {you, him} which determine I to be feminine singular and you to be masculine singular. We can thus inject the desired information by prefixing a sentence with short generic sentence fragment such as “She told him:” or “She told them that”, relying on the NMT system's coreference tracking abilities to trigger the correctly marked translation, and then remove the redundant translated prefix from the generated target sentence. We observed that using a parataxis construction (i.e. “she said to him:”) almost exclusively results in target-side parataxis as well (in 99.8% of our examples), making it easy to identify and strip the translated version from the target side. Moreover, because the parataxis construction is grammatically isolated from the rest of the sentence, it can be stripped without requiring additional changes or modification to the rest of the sentence, ensuring grammaticality."
],
[
"To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all by a female speaker conveyed to a plural, gender-neutral audience. Our parallel corpora consists of the 1,244 English sentences from the transcript, and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggest that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience.",
"To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology."
],
[
"We compare the different conditions by comparing BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . Table shows BLEU scores for the different prefixes. The numbers match our expectations: Generally, providing an incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said” which hints the model to generate a more “spoken” language which matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information to either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence."
],
[
"The BLEU score is an indication of how close the automated translation is to the reference translation, but does not tell us what exactly changed concerning the gender and number properties we attempt to control. We perform a finer-grained analysis focusing on the relation between the injected speaker and audience information, and the morphological realizations of the corresponding elements. We parse the translations and the references using a Hebrew dependency parser. In addition to the parse structure, the parser also performs morphological analysis and tagging of the individual tokens. We then perform the following analysis.",
"Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of its governing verb (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the none-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the ones requested in the prefix.",
"Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms.",
"Results: Speaker. Figure FIGREF3 shows the result for controlling the morphological properties of the speaker ({he, she, I} said). It shows the proportion of gender-inflected verbs for the various conditions and the reference. We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decreases the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases).",
"Results: Audience. The chart in Figure FIGREF3 shows the results for controlling the number of the audience (...to them vs nothing). It shows the proportion of singular vs. plural second-person pronouns on the various conditions. It shows a similar trend: the baseline system severely under-predicts the plural forms with respect to the reference translation, while adding the “to them” condition brings the proportion much closer to that of the reference."
],
[
"Closely related to our work, vanmassenhove-hardmeier-way:2018:EMNLP proposed a method and an English-French test set to evaluate gender-aware translation, based on the Europarl corpus BIBREF7 . We evaluate our method (using Google Translate and the given prefixes) on their test set to see whether it is applicable to another language pair and domain. Table shows the results of our approach vs. their published results and the Google Translate baseline. As may be expected, Google Translate outperforms their system as it is trained on a different corpus and may use more complex machine translation models. Using our method improves the BLEU score even further."
],
[
"To test our method’s outputs on multiple languages, we run our pre-and post-processing steps with Google Translate using examples we sourced from native speakers of different languages. For every example we have an English sentence and two translations in the corresponding language, one in masculine and one in feminine form. Not all examples are using the same source English sentence as different languages mark different information. Table shows that for these specific examples our method worked on INLINEFORM0 of the languages we had examples for, while for INLINEFORM1 languages both translations are masculine, and for 1 language both are feminine."
],
[
"E17-1101 showed that given input with author traits like gender, it is possible to retain those traits in Statistical Machine Translation (SMT) models. W17-4727 showed that incorporating morphological analysis in the decoder improves NMT performance for morphologically rich languages. burlot:hal-01618387 presented a new protocol for evaluating the morphological competence of MT systems, indicating that current translation systems only manage to capture some morphological phenomena correctly. Regarding the application of constraints in NMT, N16-1005 presented a method for controlling the politeness level in the generated output. DBLP:journals/corr/FiclerG17aa showed how to guide a neural text generation system towards style and content parameters like the level of professionalism, subjective/objective, sentiment and others. W17-4811 showed that incorporating more context when translating subtitles can improve the coherence of the generated translations. Most closely to our work, vanmassenhove-hardmeier-way:2018:EMNLP also addressed the missing gender information by training proprietary models with a gender-indicating-prefix. We differ from this work by treating the problem in a black-box manner, and by addressing additional information like the number of the speaker and the gender and number of the audience."
],
[
"We highlight the problem of translating between languages with different morphological systems, in which the target translation must contain gender and number information that is not available in the source. We propose a method for injecting such information into a pre-trained NMT model in a black-box setting. We demonstrate the effectiveness of this method by showing an improvement of 2.3 BLEU in an English-to-Hebrew translation setting where the speaker and audience gender can be inferred. We also perform a fine-grained syntactic analysis that shows how our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them. In future work we would like to explore automatic generation of the injected context, or the use of cross-sentence context to infer the injected information."
]
]
} | {
"question": [
"What conclusions are drawn from the syntactic analysis?",
"What type of syntactic analysis is performed?",
"How is it demonstrated that the correct gender and number information is injected using this system?",
"Which neural machine translation system is used?",
"What are the components of the black-box context injection system?"
],
"question_id": [
"2fea3c955ff78220b2c31a8ad1322bc77f6706f8",
"faa4f28a2f2968cecb770d9379ab2cfcaaf5cfab",
"da068b20988883bc324e55c073fb9c1a5c39be33",
"0d6d5b6c00551dd0d2519f117ea81d1e9e8785ec",
"edcde2b675cf8a362a63940b2bbdf02c150fe01f"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We highlight the problem of translating between languages with different morphological systems, in which the target translation must contain gender and number information that is not available in the source. We propose a method for injecting such information into a pre-trained NMT model in a black-box setting. We demonstrate the effectiveness of this method by showing an improvement of 2.3 BLEU in an English-to-Hebrew translation setting where the speaker and audience gender can be inferred. We also perform a fine-grained syntactic analysis that shows how our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them. In future work we would like to explore automatic generation of the injected context, or the use of cross-sentence context to infer the injected information."
],
"highlighted_evidence": [
"We also perform a fine-grained syntactic analysis that shows how our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them."
]
}
],
"annotation_id": [
"aedd66544adf0cf55c12d2d0b2e105e3ad46c08a"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Speaker's Gender Effects",
"Interlocutors' Gender and Number Effects"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The BLEU score is an indication of how close the automated translation is to the reference translation, but does not tell us what exactly changed concerning the gender and number properties we attempt to control. We perform a finer-grained analysis focusing on the relation between the injected speaker and audience information, and the morphological realizations of the corresponding elements. We parse the translations and the references using a Hebrew dependency parser. In addition to the parse structure, the parser also performs morphological analysis and tagging of the individual tokens. We then perform the following analysis.",
"Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of its governing verb (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the none-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the ones requested in the prefix.",
"Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms."
],
"highlighted_evidence": [
"We then perform the following analysis.\n\nSpeaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of its governing verb (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the none-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the ones requested in the prefix.\n\nInterlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms."
]
}
],
"annotation_id": [
"a27cea2a0f38708b52f9d9b7a13cdc99244b66f6"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline",
"Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compare the different conditions by comparing BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . Table shows BLEU scores for the different prefixes. The numbers match our expectations: Generally, providing an incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said” which hints the model to generate a more “spoken” language which matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information to either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence.",
"Results: Speaker. Figure FIGREF3 shows the result for controlling the morphological properties of the speaker ({he, she, I} said). It shows the proportion of gender-inflected verbs for the various conditions and the reference. We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decreases the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases)."
],
"highlighted_evidence": [
" Generally, providing an incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline.",
"We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decreases the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases)."
]
}
],
"annotation_id": [
"8686128a30bedaf90c9fdc17d69b887fd8b509bd"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Google's machine translation system (GMT)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all by a female speaker conveyed to a plural, gender-neutral audience. Our parallel corpora consists of the 1,244 English sentences from the transcript, and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggest that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience."
],
"highlighted_evidence": [
"To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API."
]
}
],
"annotation_id": [
"7a7f39d12b1d910bc0ae826f3ee2a957141aecf3"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.",
"To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology."
],
"highlighted_evidence": [
"Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.",
"To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology."
]
}
],
"annotation_id": [
"009c8fc90fb35c7b74dc9f8dd9a08acc0fa5ea25"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: BLEU results on the Silverman dataset",
"Figure 1: Gender inflection statistics for verbs governed by first-person pronouns.",
"Table 2: Comparison of our approach (using Google Translate) to Vanmassenhove et al. (2018) on their English-French gender corpus.",
"Table 3: Examples of languages where the speaker’s gender changes morphological markings in different languages, and translations using the prefix “He said:” or “She said:” accordingly"
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"4-Table2-1.png",
"5-Table3-1.png"
]
} |
1807.00868 | Exploring End-to-End Techniques for Low-Resource Speech Recognition | In this work we present simple grapheme-based system for low-resource speech recognition using Babel data for Turkish spontaneous speech (80 hours). We have investigated different neural network architectures performance, including fully-convolutional, recurrent and ResNet with GRU. Different features and normalization techniques are compared as well. We also proposed CTC-loss modification using segmentation during training, which leads to improvement while decoding with small beam size. Our best model achieved word error rate of 45.8%, which is the best reported result for end-to-end systems using in-domain data for this task, according to our knowledge. | {
"section_name": [
"Introduction",
"Related work",
"Basic setup",
"Experiments with architecture",
"Loss modification: segmenting during training",
"Using different features",
"Varying model size and number of layers",
"Training the best model",
"Conclusions and future work",
"Acknowledgements"
],
"paragraphs": [
[
"Although development of the first speech recognition systems began half a century ago, there has been a significant increase of the accuracy of ASR systems and number of their applications for the recent ten years, even for low-resource languages BIBREF0 , BIBREF1 .",
"This is mainly due to widespread applying of deep learning and very effective performance of neural networks in hybrid recognition systems (DNN-HMM). However, for last few years there has been a trend to change traditional ASR training paradigm. End-to-end training systems gradually displace complex multistage learning process (including training of GMMs BIBREF2 , clustering of allophones’ states, aligning of speech to clustered senones, training neural networks with cross-entropy loss, followed by retraining with sequence-discriminative criterion). The new approach implies training the system in one global step, working only with acoustic data and reference texts, and significantly simplifies or even completely excludes in some cases the decoding process. It also avoids the problem of out-of-vocabulary words (OOV), because end-to-end system, trained with parts of the words as targets, can construct new words itself using graphemes or subword units, while traditional DNN-HMM systems are limited with language model vocabulary.",
"The whole variety of end-to-end systems can be divided into 3 main categories: Connectionist Temporal Classification (CTC) BIBREF3 ; Sequence-to-sequence models with attention mechanism BIBREF4 ; RNN-Transducers BIBREF5 .",
"Connectionist Temporal Classification (CTC) approach uses loss functions that utilize all possible alignments between reference text and audio data. Targets for CTC-based system can be phonemes, graphemes, syllables and other subword units and even whole words. However, a lot more data is usually required to train such systems well, compared to traditional hybrid systems.",
"Sequence-to-sequence models are used to map entire input sequences to output sequences without any assumptions about their alignment. The most popular architecture for sequence-to-sequence models is encoder-decoder model with attention. Encoder and decoder are usually constructed using recurrent neural networks, basic attention mechanism calculates energy weights that emphasize importance of encoder vectors for decoding on this step, and then sums all these vectors with energy weights. Encoder-decoder models with attention mechanism show results close to traditional DNN-HMM systems and in some cases surpass them, but for a number of reasons their usage is still rather limited. First of all, this is related to the fact, that such systems show best results when the duration of real utterances is close to the duration of utterances from training data. However, when the duration difference increases, the performance degrades significantly BIBREF4 .",
"Moreover, the entire utterance must be preprocessed by encoder before start of decoder's work. This is the reason, why it is hard to apply the approach to recognize long recordings or streaming audio. Segmenting long recordings into shorter utterances solves the duration issue, but leads to a context break, and eventually negatively affects recognition accuracy. Secondly, the computational complexity of encoder-decoder models is high because of recurrent networks usage, so these models are rather slow and hard to parallelize.",
"The idea of RNN-Transducer is an extension of CTC and provides the ability to model inner dependencies separately and jointly between elements of both input (audio frames) and output (phonemes and other subword units) sequences. Despite of mathematical elegance, such systems are very complicated and hard to implement, so they are still rarely used, although several impressive results were obtained using this technique.",
"CTC-based approach is easier to implement, better scaled and has many “degrees of freedom”, which allows to significantly improve baseline systems and achieve results close to state-of-the-art. Moreover, CTC-based systems are well compatible with traditional WFST-decoders and can be easily integrated with conventional ASR systems.",
"Besides, as already mentioned, CTC-systems are rather sensitive to the amount of training data, so it is very relevant to study how to build effective CTC-based recognition system using a small amount of training samples. It is especially actual for low-resource languages, where we have only a few dozen hours of speech. Building ASR system for low-resource languages is one of the aims of international Babel program, funded by the Intelligence Advanced Research Projects Activity (IARPA). Within the program extensive research was carried out, resulting in creation of a number of modern ASR systems for low-resource languages. Recently, end-to-end approaches were applied to this task, showing expectedly worse results than traditional systems, although the difference is rather small.",
"In this paper we explore a number of ways to improve end-to-end CTC-based systems in low-resource scenarios using the Turkish language dataset from the IARPA Babel collection. In the next section we describe in more details different versions of CTC-systems and their application for low-resource speech recognition. Section 3 describes the experiments and their results. Section 4 summarizes the results and discusses possible ways for further work."
],
[
"Development of CTC-based systems originates from the paper BIBREF3 where CTC loss was introduced. This loss is a total probability of labels sequence given observation sequence, which takes into account all possible alignments induced by a given words sequence.",
"Although a number of possible alignments increases exponentially with sequences’ lengths, there is an efficient algorithm to compute CTC loss based on dynamic programming principle (known as Forward-Backward algorithm). This algorithm operates with posterior probabilities of any output sequence element observation given the time frame and CTC loss is differentiable with respect to these probabilities.",
"Therefore, if an acoustic model is based on the neural network which estimates these posteriors, its training may be performed with a conventional error back-propagation gradient descent BIBREF6 . Training of ASR system based on such a model does not require an explicit alignment of input utterance to the elements of output sequence and thus may be performed in end-to-end fashion. It is also important that CTC loss accumulates the information about the whole output sequence, and hence its optimization is in some sense an alternative to the traditional fine-tuning of neural network acoustic models by means of sequence-discriminative criteria such as sMBR BIBREF7 etc. The implementation of CTC is conventionally based on RNN/LSTM networks, including bidirectional ones as acoustic models, since they are known to model long context effectively.",
"The important component of CTC is a special “blank” symbol which fills in gaps between meaningful elements of output sequence to equalize its length to the number of frames in the input sequence. It corresponds to a separate output neuron, and blank symbols are deleted from the recognized sequence to obtain the final result. In BIBREF8 a modification of CTC loss was proposed, referred as Auto SeGmentation criterion (ASG loss), which does not use blank symbols. Instead of using “blank”, a simple transition probability model for an output symbols is introduced. This leads to a significant simplification and speedup of computations. Moreover, the improved recognition results compared to basic CTC loss were obtained.",
"DeepSpeech BIBREF9 developed by Baidu Inc. was one of the first systems that demonstrated an effectiveness of CTC-based speech recognition in LVCSR tasks. Being trained on 2300 hours of English Conversational Telephone Speech data, it demonstrated state-of-the-art results on Hub5'00 evaluation set. Research in this direction continued and resulted in DeepSpeech2 architecture BIBREF10 , composed of both convolutional and recurrent layers. This system demonstrates improved accuracy of recognition of both English and Mandarin speech. Another successful example of applying CTC to LVCSR tasks is EESEN system BIBREF11 . It integrates an RNN-based model trained with CTC criterion to the conventional WFST-based decoder from the Kaldi toolkit BIBREF12 . The paper BIBREF13 shows that end-to-end systems may be successfully built from convolutional layers only instead of recurrent ones. It was demonstrated that using Gated Convolutional Units (GLU-CNNs) and training with ASG-loss leads to the state-of-the-art results on the LibriSpeech database (960 hours of training data).",
"Recently, a new modification of DeepSpeech2 architecture was proposed in BIBREF14 . Several lower convolutional layers were replaced with a deep residual network with depth-wise separable convolutions. This modification along with using strong regularization and data augmentation techniques leads to the results close to DeepSpeech2 in spite of significantly lower amount of data used for training. Indeed, one of the models was trained with only 80 hours of speech data (which were augmented with noisy and speed-perturbed versions of original data).",
"These results suggest that CTC can be successfully applied for the training of ASR systems for low-resource languages, in particular, for those included in Babel research program (the amount of training data for them is normally 40 to 80 hours of speech).",
"Currently, Babel corpus contains data for more than 20 languages, and for most of them quite good traditional ASR system were built BIBREF15 , BIBREF16 , BIBREF17 . In order to improve speech recognition accuracy for a given language, data from other languages is widely used as well. It can be used to train multilingual system via multitask learning or to obtain high-level multilingual representations, usually bottleneck features, extracted from a pre-trained multilingual network.",
"One of the first attempts to build ASR system for low-resource BABEL languages using CTC-based end-to-end training was made recently BIBREF18 . Despite the obtained results are somewhat worse compared to the state-of-the-art traditional systems, they still demonstrate that CTC-based approach is viable for building low-resource ASR systems. The aim of our work is to investigate some ways to improve the obtained results."
],
[
"For all experiments we used conversational speech from IARPA Babel Turkish Language Pack (LDC2016S10). This corpus contains about 80 hours of transcribed speech for training and 10 hours for development. The dataset is rather small compared to widely used benchmarks for conversational speech: English Switchboard corpus (300 hours, LDC97S62) and Fisher dataset (2000 hours, LDC2004S13 and LDC2005S13).",
"As targets we use 32 symbols: 29 lowercase characters of Turkish alphabet BIBREF19 , apostrophe, space and special 〈blank〉 character that means “no output”. Thus we do not use any prior linguistic knowledge and also avoid OOV problem as the system can construct new words directly.",
"All models are trained with CTC-loss. Input features are 40 mel-scaled log filterbank enegries (FBanks) computed every 10 ms with 25 ms window, concatenated with deltas and delta-deltas (120 features in vector). We also tried to use spectrogram and experimented with different normalization techniques.",
"For decoding we used character-based beam search BIBREF20 with 3-gram language model build with SRILM package BIBREF21 finding sequence of characters INLINEFORM0 that maximizes the following objective BIBREF9 : INLINEFORM1 ",
"where INLINEFORM0 is language model weight and INLINEFORM1 is word insertion penalty.",
"For all experiments we used INLINEFORM0 , INLINEFORM1 , and performed decoding with beam width equal to 100 and 2000, which is not very large compared to 7000 and more active hypotheses used in traditional WFST decoders (e.g. many Kaldi recipes do decoding with INLINEFORM2 ).",
"To compare with other published results BIBREF18 , BIBREF22 we used Sclite BIBREF23 scoring package to measure results of decoding with beam width 2000, that takes into account incomplete words and spoken noise in reference texts and doesn't penalize model if it incorrectly recognize these pieces.",
"Also we report WER (word error rate) for simple argmax decoder (taking labels with maximum output on each time step and than applying CTC decoding rule – collapse repeated labels and remove “blanks”)."
],
[
"We tried to explore the behavior of different neural network architectures in case when rather small data is available. We used multi-layer bidirectional LSTM networks, tried fully-convolutional architecture similar to Wav2Letter BIBREF8 and explored DeepSpeech-like architecture developed by Salesforce (DS-SF) BIBREF14 .",
"The convolutional model consists of 11 convolutional layers with batch normalization after each layer. The DeepSpeech-like architecture consists of 5-layers residual network with depth-wise separable convolutions followed by 4-layer bidirectional Gated Recurrent Unit (GRU) as described in BIBREF14 .",
"Our baseline bidirectional LSTM is 6-layers network with 320 hidden units per direction as in BIBREF18 . Also we tried to use bLSTM to label every second frame (20 ms) concatenating every first output from first layer with second and taking this as input for second model layer.",
"The performance of our baseline models is shown in Table TABREF6 ."
],
[
"It is known that CTC-loss is very unstable for long utterances BIBREF3 , and smaller utterances are more useful for this task. Some techniques were developed to help model converge faster, e.g. sortagrad BIBREF10 (using shorter segments at the beginning of training).",
"To compute CTC-loss we use all possible alignments between audio features and reference text, but only some of the alignments make sense. Traditional DNN-HMM systems also use iterative training with finding best alignment and then training neural network to approximate this alignment. Therefore, we propose the following algorithm to use segmentation during training:",
"compute CTC-alignment (find the sequence of targets with minimal loss that can be mapped to real targets by collapsing repeated characters and removing blanks)",
"perform greedy decoding (argmax on each step)",
"find “well-recognized” words with INLINEFORM0 ( INLINEFORM1 is a hyperparameter): segment should start and end with space; word is “well-recognized” when argmax decoding is equal to computed alignment",
"if the word is “well-recognized”, divide the utterance into 5 segments: left segment before space, left space, the word, right space and right segment",
"compute CTC-loss for all this segments separately and do back-propagation as usual",
"The results of training with this criterion are shown in Table TABREF13 . The proposed criterion doesn't lead to consistent improvement while decoding with large beam width (2000), but shows significant improvement when decoding with smaller beam (100). We plan to further explore utilizing alignment information during training."
],
[
"We explored different normalization techniques. FBanks with cepstral mean normalization (CMN) perform better than raw FBanks. We found using variance with mean normalization (CMVN) unnecessary for the task. Using deltas and delta-deltas improves model, so we used them in other experiments. Models trained with spectrogram features converge slower and to worse minimum, but the difference when using CMN is not very big compared to FBanks."
],
[
"Experiments with varying number of hidden units of 6-layer bLSTM models are presented in Table TABREF17 . Models with 512 and 768 hidden units are worse than with 320, but model with 1024 hidden units is significantly better than others. We also observed that model with 6 layers performs better than others."
],
[
"To train our best model we chose the best network from our experiments (6-layer bLSTM with 1024 hidden units), trained it with Adam optimizer and fine-tuned with SGD with momentum using exponential learning rate decay. The best model trained with speed and volume perturbation BIBREF24 achieved 45.8% WER, which is the best published end-to-end result on Babel Turkish dataset using in-domain data. For comparison, WER of model trained using in-domain data in BIBREF18 is 53.1%, using 4 additional languages (including English Switchboard dataset) – 48.7%. It is also not far from Kaldi DNN-HMM system BIBREF22 with 43.8% WER."
],
[
"In this paper we explored different end-to-end architectures in low-resource ASR task using Babel Turkish dataset. We considered different ways to improve performance and proposed promising CTC-loss modification that uses segmentation during training. Our final system achieved 45.8% WER using in-domain data only, which is the best published result for Turkish end-to-end systems. Our work also shows than well-tuned end-to-end system can achieve results very close to traditional DNN-HMM systems even for low-resource languages. In future work we plan to further investigate different loss modifications (Gram-CTC, ASG) and try to use RNN-Transducers and multi-task learning."
],
[
"This work was financially supported by the Ministry of Education and Science of the Russian Federation, Contract 14.575.21.0132 (IDRFMEFI57517X0132)."
]
]
} | {
"question": [
"What normalization techniques are mentioned?",
"What features do they experiment with?",
"Which architecture is their best model?",
"What kind of spontaneous speech is used?"
],
"question_id": [
"d20d6c8ecd7cb0126479305d27deb0c8b642b09f",
"11e6b79f1f48ddc6c580c4d0a3cb9bcb42decb17",
"2677b88c2def3ed94e25a776599555a788d197f2",
"8ca31caa34cc5b65dc1d01d0d1f36bf8c4928805"
],
"nlp_background": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"FBanks with cepstral mean normalization (CMN)",
"variance with mean normalization (CMVN)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We explored different normalization techniques. FBanks with cepstral mean normalization (CMN) perform better than raw FBanks. We found using variance with mean normalization (CMVN) unnecessary for the task. Using deltas and delta-deltas improves model, so we used them in other experiments. Models trained with spectrogram features converge slower and to worse minimum, but the difference when using CMN is not very big compared to FBanks."
],
"highlighted_evidence": [
"We explored different normalization techniques. FBanks with cepstral mean normalization (CMN) perform better than raw FBanks. We found using variance with mean normalization (CMVN) unnecessary for the task. Using deltas and delta-deltas improves model, so we used them in other experiments. Models trained with spectrogram features converge slower and to worse minimum, but the difference when using CMN is not very big compared to FBanks."
]
}
],
"annotation_id": [
"00bd2818def152371948b5d8a7db86752c8cd5fa"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"40 mel-scaled log filterbank enegries (FBanks) computed every 10 ms with 25 ms window",
"deltas and delta-deltas (120 features in vector)",
"spectrogram"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"All models are trained with CTC-loss. Input features are 40 mel-scaled log filterbank enegries (FBanks) computed every 10 ms with 25 ms window, concatenated with deltas and delta-deltas (120 features in vector). We also tried to use spectrogram and experimented with different normalization techniques."
],
"highlighted_evidence": [
"Input features are 40 mel-scaled log filterbank enegries (FBanks) computed every 10 ms with 25 ms window, concatenated with deltas and delta-deltas (120 features in vector). We also tried to use spectrogram and experimented with different normalization techniques."
]
}
],
"annotation_id": [
"8b1250f94f17845fabede594ed1c8464c50e1422"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"6-layer bLSTM with 1024 hidden units"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To train our best model we chose the best network from our experiments (6-layer bLSTM with 1024 hidden units), trained it with Adam optimizer and fine-tuned with SGD with momentum using exponential learning rate decay. The best model trained with speed and volume perturbation BIBREF24 achieved 45.8% WER, which is the best published end-to-end result on Babel Turkish dataset using in-domain data. For comparison, WER of model trained using in-domain data in BIBREF18 is 53.1%, using 4 additional languages (including English Switchboard dataset) – 48.7%. It is also not far from Kaldi DNN-HMM system BIBREF22 with 43.8% WER."
],
"highlighted_evidence": [
"To train our best model we chose the best network from our experiments (6-layer bLSTM with 1024 hidden units), trained it with Adam optimizer and fine-tuned with SGD with momentum using exponential learning rate decay."
]
}
],
"annotation_id": [
"09f0651afe10ee0e4e1c80edb1357f3edc913301"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"985adf461dea16570d09572e8acd53135376abe2"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Fig. 1: Architectures",
"Table 1: Baseline models trained with CTC-loss",
"Table 2: Models trained with CTC and proposed CTC modification",
"Table 3: 6-layers bLSTM trained using different features and normalization",
"Table 4: Comparison of bLSTM models with different number of hidden units.",
"Table 5: Using data augmentation and finetuning with SGD"
],
"file": [
"5-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png"
]
} |
1909.13375 | Tag-based Multi-Span Extraction in Reading Comprehension | With models reaching human performance on many popular reading comprehension datasets in recent years, a new dataset, DROP, introduced questions that were expected to present a harder challenge for reading comprehension models. Among these new types of questions were "multi-span questions", questions whose answers consist of several spans from either the paragraph or the question itself. Until now, only one model attempted to tackle multi-span questions as a part of its design. In this work, we suggest a new approach for tackling multi-span questions, based on sequence tagging, which differs from previous approaches for answering span questions. We show that our approach leads to an absolute improvement of 29.7 EM and 15.1 F1 compared to existing state-of-the-art results, while not hurting performance on other question types. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset. | {
"section_name": [
"Introduction",
"Related Work",
"Model",
"Model ::: NABERT+",
"Model ::: NABERT+ ::: Heads Shared with NABERT+",
"Model ::: Multi-Span Head",
"Model ::: Objective and Training",
"Model ::: Objective and Training ::: Multi-Span Head Training Objective",
"Model ::: Objective and Training ::: Multi-Span Head Correct Tag Sequences",
"Model ::: Objective and Training ::: Dealing with too Many Correct Tag Sequences",
"Model ::: Tag Sequence Prediction with the Multi-Span Head",
"Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Viterbi Decoding",
"Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Beam Search",
"Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Greedy Tagging",
"Preprocessing",
"Preprocessing ::: Simple Preprocessing ::: Improved Textual Parsing",
"Preprocessing ::: Simple Preprocessing ::: Improved Handling of Numbers",
"Preprocessing ::: Using NER for Cleaning Up Multi-Span Questions",
"Training",
"Results and Discussion ::: Performance on DROP's Development Set",
"Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to the NABERT+ Baseline",
"Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to MTMSN",
"Results and Discussion ::: Performance on DROP's Test Set",
"Results and Discussion ::: Ablation Studies",
"Conclusion",
"Future Work ::: A Different Loss for Multi-span Questions",
"Future Work ::: Explore Utilization of Non-First Wordpiece Sub-Tokens"
],
"paragraphs": [
[
"The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with much of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published.",
"DROP aimed to present questions that require more complex reasoning in order to answer than that of previous datasets, in a hope to push the field towards a more comprehensive analysis of paragraphs of text. In addition to questions whose answers are a single continuous span from the paragraph text (questions of a type already included in SQuAD), DROP introduced additional types of questions. Among these new types were questions that require simple numerical reasoning, i.e questions whose answer is the result of a simple arithmetic expression containing numbers from the passage, and questions whose answers consist of several spans taken from the paragraph or the question itself, what we will denote as \"multi-span questions\".",
"Of all the existing models that tried to tackle DROP, only one model BIBREF4 directly targeted multi-span questions in a manner that wasn't just a by-product of the model's overall performance. In this paper, we propose a new method for tackling multi-span questions. Our method takes a different path from that of the aforementioned model. It does not try to generalize the existing approach for tackling single-span questions, but instead attempts to attack this issue with a new, tag-based, approach."
],
[
"Numerically-aware QANet (NAQANet) BIBREF3 was the model released with DROP. It uses QANET BIBREF5, at the time the best-performing published model on SQuAD 1.1 BIBREF0 (without data augmentation or pretraining), as the encoder. On top of QANET, NAQANet adds four different output layers, which we refer to as \"heads\". Each of these heads is designed to tackle a specific question type from DROP, where these types where identified by DROP's authors post-creation of the dataset. These four heads are (1) Passage span head, designed for producing answers that consist of a single span from the passage. This head deals with the type of questions already introduced in SQuAD. (2) Question span head, for answers that consist of a single span from the question. (3) Arithmetic head, for answers that require adding or subtracting numbers from the passage. (4) Count head, for answers that require counting and sorting entities from the text. In addition, to determine which head should be used to predict an answer, a 4-way categorical variable, as per the number of heads, is trained. We denote this categorical variable as the \"head predictor\".",
"Numerically-aware BERT (NABERT+) BIBREF6 introduced two main improvements over NAQANET. The first was to replace the QANET encoder with BERT. This change alone resulted in an absolute improvement of more than eight points in both EM and F1 metrics. The second improvement was to the arithmetic head, consisting of the addition of \"standard numbers\" and \"templates\". Standard numbers were predefined numbers which were added as additional inputs to the arithmetic head, regardless of their occurrence in the passage. Templates were an attempt to enrich the head's arithmetic capabilities, by adding the ability of doing simple multiplications and divisions between up to three numbers.",
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable.",
"Additionally, MTMSN introduced two new other, non span-related, components. The first was a new \"negation\" head, meant to deal with questions deemed as requiring logical negation (e.g. \"How many percent were not German?\"). The second was improving the arithmetic head by using beam search to re-rank candidate arithmetic expressions."
],
[
"Problem statement. Given a pair $(x^P,x^Q)$ of a passage and a question respectively, both comprised of tokens from a vocabulary $V$, we wish to predict an answer $y$. The answer could be either a collection of spans from the input, or a number, supposedly arrived to by performing arithmetic reasoning on the input. We want to estimate $p(y;x^P,x^Q)$.",
"The basic structure of our model is shared with NABERT+, which in turn is shared with that of NAQANET (the model initially released with DROP). Consequently, meticulously presenting every part of our model would very likely prove redundant. As a reasonable compromise, we will introduce the shared parts with more brevity, and will go into greater detail when presenting our contributions."
],
[
"Assume there are $K$ answer heads in the model and their weights denoted by $\\theta $. For each pair $(x^P,x^Q)$ we assume a latent categorical random variable $z\\in \\left\\lbrace 1,\\ldots \\,K\\right\\rbrace $ such that the probability of an answer $y$ is",
"where each component of the mixture corresponds to an output head such that",
"Note that a head is not always capable of producing the correct answer $y_\\text{gold}$ for each type of question, in which case $p\\left(y_\\text{gold} \\vert z ; x^{P},x^{Q},\\theta \\right)=0$. For example, the arithmetic head, whose output is always a single number, cannot possibly produce a correct answer for a multi-span question.",
"For a multi-span question with an answer composed of $l$ spans, denote $y_{{\\text{gold}}_{\\textit {MS}}}=\\left\\lbrace y_{{\\text{gold}}_1}, \\ldots , y_{{\\text{gold}}_l} \\right\\rbrace $. NAQANET and NABERT+ had no head capable of outputting correct answers for multi-span questions. Instead of ignoring them in training, both models settled on using \"semi-correct answers\": each $y_\\text{gold} \\in y_{{\\text{gold}}_{\\textit {MS}}}$ was considered to be a correct answer (only in training). By deliberately encouraging the model to provide partial answers for multi-span questions, they were able to improve the corresponding F1 score. As our model does have a head with the ability to answer multi-span questions correctly, we didn't provide the aforementioned semi-correct answers to any of the other heads. Otherwise, we would have skewed the predictions of the head predictor and effectively mislead the other heads to believe they could predict correct answers for multi-span questions."
],
[
"Before going over the answer heads, two additional components should be introduced - the summary vectors, and the head predictor.",
"Summary vectors. The summary vectors are two fixed-size learned representations of the question and the passage, which serve as an input for some of the heads. To create the summary vectors, first define $\\mathbf {T}$ as BERT's output on a $(x^{P},x^{Q})$ input. Then, let $\\mathbf {T}^{P}$ and $\\mathbf {T}^{Q}$ be subsequences of T that correspond to $x^P$ and $x^Q$ respectively. Finally, let us also define Bdim as the dimension of the tokens in $\\mathbf {T}$ (e.g 768 for BERTbase), and have $\\mathbf {W}^P \\in \\mathbb {R}^\\texttt {Bdim}$ and $\\mathbf {W}^Q \\in \\mathbb {R}^\\texttt {Bdim}$ as learned linear layers. Then, the summary vectors are computed as:",
"Head predictor. A learned categorical variable with its number of outcomes equal to the number of answer heads in the model. Used to assign probabilities for using each of the heads in prediction.",
"where FFN is a two-layer feed-forward network with RELU activation.",
"Passage span. Define $\\textbf {W}^S \\in \\mathbb {R}^\\texttt {Bdim}$ and $\\textbf {W}^E \\in \\mathbb {R}^\\texttt {Bdim}$ as learned vectors. Then the probabilities of the start and end positions of a passage span are computed as",
"Question span. The probabilities of the start and end positions of a question span are computed as",
"where $\\textbf {e}^{|\\textbf {T}^Q|}\\otimes \\textbf {h}^P$ repeats $\\textbf {h}^P$ for each component of $\\textbf {T}^Q$.",
"Count. Counting is treated as a multi-class prediction problem with the numbers 0-9 as possible labels. The label probabilities are computed as",
"Arithmetic. As in NAQNET, this head obtains all of the numbers from the passage, and assigns a plus, minus or zero (\"ignore\") for each number. As BERT uses wordpiece tokenization, some numbers are broken up into multiple tokens. Following NABERT+, we chose to represent each number by its first wordpiece. That is, if $\\textbf {N}^i$ is the set of tokens corresponding to the $i^\\text{th}$ number, we define a number representation as $\\textbf {h}_i^N = \\textbf {N}^i_0$.",
"The selection of the sign for each number is a multi-class prediction problem with options $\\lbrace 0, +, -\\rbrace $, and the probabilities for the signs are given by",
"As for NABERT+'s two additional arithmetic features, we decided on using only the standard numbers, as the benefits from using templates were deemed inconclusive. Note that unlike the single-span heads, which are related to our introduction of a multi-span head, the arithmetic and count heads were not intended to play a significant role in our work. We didn't aim to improve results on these types of questions, perhaps only as a by-product of improving the general reading comprehension ability of our model."
],
[
"A subset of questions that wasn't directly dealt with by the base models (NAQANET, NABERT+) is questions that have an answer which is composed of multiple non-continuous spans. We suggest a head that will be able to deal with both single-span and multi-span questions.",
"To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans.",
"As words are broken up by the wordpiece tokenization for BERT, we decided on only considering the representation of the first sub-token of the word to tag, following the NER task from BIBREF2.",
"For the $i$-th token of an input, the probability to be assigned a $\\text{tag} \\in \\left\\lbrace {\\mathtt {B},\\mathtt {I},\\mathtt {O}} \\right\\rbrace $ is computed as"
],
[
"To train our model, we try to maximize the log-likelihood of the correct answer $p(y_\\text{gold};x^{P},x^{Q},\\theta )$ as defined in Section SECREF2. If no head is capable of predicting the gold answer, the sample is skipped.",
"We enumerate over every answer head $z\\in \\left\\lbrace \\textit {PS}, \\textit {QS}, \\textit {C}, \\textit {A}, \\textit {MS}\\right\\rbrace $ (Passage Span, Question Span, Count, Arithmetic, Multi-Span) to compute each of the objective's addends:",
"Note that we are in a weakly supervised setup: the answer type is not given, and neither is the correct arithmetic expression required for deriving some answers. Therefore, it is possible that $y_\\text{gold}$ could be derived by more than one way, even from the same head, with no indication of which is the \"correct\" one.",
"We use the weakly supervised training method used in NABERT+ and NAQANET. Based on BIBREF9, for each head we find all the executions that evaluate to the correct answer and maximize their marginal likelihood .",
"For a datapoint $\\left(y, x^{P}, x^{Q} \\right)$ let $\\chi ^z$ be the set of all possible ways to get $y$ for answer head $z\\in \\left\\lbrace \\textit {PS}, \\textit {QS}, \\textit {C}, \\textit {A}, \\textit {MS}\\right\\rbrace $. Then, as in NABERT+, we have",
"Finally, for the arithmetic head, let $\\mu $ be the set of all the standard numbers and the numbers from the passage, and let $\\mathbf {\\chi }^{\\textit {A}}$ be the set of correct sign assignments to these numbers. Then, we have"
],
[
"Denote by ${\\chi }^{\\textit {MS}}$ the set of correct tag sequences. If the concatenation of a question and a passage is $m$ tokens long, then denote a correct tag sequence as $\\left(\\text{tag}_1,\\ldots ,\\text{tag}_m\\right)$.",
"We approximate the likelihood of a tag sequence by assuming independence between the sequence's positions, and multiplying the likelihoods of all the correct tags in the sequence. Then, we have"
],
[
"Since a given multi-span answer is a collection of spans, it is required to obtain its matching tag sequences in order to compute the training objective.",
"In what we consider to be a correct tag sequence, each answer span will be marked at least once. Due to the weakly supervised setup, we consider all the question/passage spans that match the answer spans as being correct. To illustrate, consider the following simple example. Given the text \"X Y Z Z\" and the correct multi-span answer [\"Y\", \"Z\"], there are three correct tag sequences: $\\mathtt {O\\,B\\,B\\,B}$,$\\quad $ $\\mathtt {O\\,B\\,B\\,O}$,$\\quad $ $\\mathtt {O\\,B\\,O\\,B}$."
],
[
"The number of correct tag sequences can be expressed by",
"where $s$ is the number of spans in the answer and $\\#_i$ is the number of times the $i^\\text{th}$ span appears in the text.",
"For questions with a reasonable amount of correct tag sequences, we generate all of them before the training starts. However, there is a small group of questions for which the amount of such sequences is between 10,000 and 100,000,000 - too many to generate and train on. In such cases, inspired by BIBREF9, instead of just using an arbitrary subset of the correct sequences, we use beam search to generate the top-k predictions of the training model, and then filter out the incorrect sequences. Compared to using an arbitrary subset, using these sequences causes the optimization to be done with respect to answers more compatible with the model. If no correct tag sequences were predicted within the top-k, we use the tag sequence that has all of the answer spans marked."
],
[
"Based on the outputs $\\textbf {p}_{i}^{{\\text{tag}}_{i}}$ we would like to predict the most likely sequence given the $\\mathtt {BIO}$ constraints. Denote $\\textit {validSeqs}$ as the set of all $\\mathtt {BIO}$ sequences of length $m$ that are valid according to the rules specified in Section SECREF5. The $\\mathtt {BIO}$ tag sequence to predict is then",
"We considered the following approaches:"
],
[
"A natural candidate for getting the most likely sequence is Viterbi decoding, BIBREF10 with transition probabilities learned by a $\\mathtt {BIO}$ constrained Conditional Random Field (CRF) BIBREF11. However, further inspection of our sequence's properties reveals that such a computational effort is probably not necessary, as explained in following paragraphs."
],
[
"Due to our use of $\\mathtt {BIO}$ tags and their constraints, observe that past tag predictions only affect future tag predictions from the last $\\mathtt {B}$ prediction and as long as the best tag to predict is $\\mathtt {I}$. Considering the frequency and length of the correct spans in the question and the passage, effectively there's no effect of past sequence's positions on future ones, other than a very few positions ahead. Together with the fact that at each prediction step there are no more than 3 tags to consider, it means using beam search to get the most likely sequence is very reasonable and even allows near-optimal results with small beam width values."
],
[
"Notice that greedy tagging does not enforce the $\\mathtt {BIO}$ constraints. However, since the multi-span head's training objective adheres to the $\\mathtt {BIO}$ constraints via being given the correct tag sequences, we can expect that even with greedy tagging the predictions will mostly adhere to these constraints as well. In case there are violations, their amendment is required post-prediction. Albeit faster, greedy tagging resulted in a small performance hit, as seen in Table TABREF26."
],
[
"We tokenize the passage, question, and all answer texts using the BERT uncased wordpiece tokenizer from huggingface. The tokenization resulting from each $(x^P,x^Q)$ input pair is truncated at 512 tokens so it can be fed to BERT as an input. However, before tokenizing the dataset texts, we perform additional preprocessing as listed below."
],
[
"The raw dataset included almost a thousand of HTML entities that did not get parsed properly, e.g \" \" instead of a simple space. In addition, we fixed some quirks that were introduced by the original Wikipedia parsing method. For example, when encountering a reference to an external source that included a specific page from that reference, the original parser ended up introducing a redundant \":<PAGE NUMBER>\" into the parsed text."
],
[
"Although we previously stated that we aren't focusing on improving arithmetic performance, while analyzing the training process we encountered two arithmetic-related issues that could be resolved rather quickly: a precision issue and a number extraction issue. Regarding precision, we noticed that while either generating expressions for the arithmetic head, or using the arithmetic head to predict a numeric answer, the value resulting from an arithmetic operation would not always yield the exact result due to floating point precision limitations. For example, $5.8 + 6.6 = 12.3999...$ instead of $12.4$. This issue has caused a significant performance hit of about 1.5 points for both F1 and EM and was fixed by simply rounding numbers to 5 decimal places, assuming that no answer requires a greater precision. Regarding number extraction, we noticed that some numeric entities, required in order to produce a correct answer, weren't being extracted from the passage. Examples include ordinals (121st, 189th) and some \"per-\" units (1,580.7/km2, 1050.95/month)."
],
[
"The training dataset contains multi-span questions with answers that are clearly incorrect, with examples shown in Table TABREF22. In order to mitigate this, we applied an answer-cleaning technique using a pretrained Named Entity Recognition (NER) model BIBREF12 in the following manner: (1) Pre-define question prefixes whose answer spans are expected to contain only a specific entity type and filter the matching questions. (2) For a given answer of a filtered question, remove any span that does not contain at least one token of the expected type, where the types are determined by applying the NER model on the passage. For example, if a question starts with \"who scored\", we expect that any valid span will include a person entity ($\\mathtt {PER}$). By applying such rules, we discovered that at least 3% of the multi-span questions in the training dataset included incorrect spans. As our analysis of prefixes wasn't exhaustive, we believe that this method could yield further gains. Table TABREF22 shows a few of our cleaning method results, where we perfectly clean the first two questions, and partially clean a third question."
],
[
"The starting point for our implementation was the NABERT+ model, which in turn was based on allenai's NAQANET. Our implementation can be found on GitHub. All three models utilize the allennlp framework. The pretrained BERT models were supplied by huggingface. For our base model we used bert-base-uncased. For our large models we used the standard bert-large-uncased-whole-word-masking and the squad fine-tuned bert-large-uncased- whole-word-masking-finetuned-squad.",
"Due to limited computational resources, we did not perform any hyperparameter searching. We preferred to focus our efforts on the ablation studies, in hope to gain further insights on the effect of the components that we ourselves introduced. For ease of performance comparison, we followed NABERT+'s training settings: we used the BERT Adam optimizer from huggingface with default settings and a learning rate of $1e^{-5}$. The only difference was that we used a batch size of 12. We trained our base model for 20 epochs. For the large models we used a batch size of 3 with a learning rate of $5e^{-6}$ and trained for 5 epochs, except for the model without the single-span heads that was trained with a batch size of 2 for 7 epochs. F1 was used as our validation metric. All models were trained on a single GPU with 12-16GB of memory."
],
[
"Table TABREF24 shows the results on DROP's development set. Compared to our base models, our large models exhibit a substantial improvement across all metrics."
],
[
"We can see that our base model surpasses the NABERT+ baseline in every metric. The major improvement in multi-span performance was expected, as our multi-span head was introduced specifically to tackle this type of questions. For the other types, most of the improvement came from better preprocessing. A more detailed discussion could be found in Section (SECREF36)."
],
[
"Notice that different BERTlarge models were used, so the comparison is less direct. Overall, our large models exhibits similar results to those of MTMSNlarge.",
"For multi-span questions we achieve a significantly better performance. While a breakdown of metrics was only available for MTMSNlarge, notice that even when comparing these metrics to our base model, we still achieve a 12.2 absolute improvement in EM, and a 2.3 improvement in F1. All that, while keeping in mind we compare a base model to a large model (for reference, note the 8 point improvement between MTMSNbase and MTMSNlarge in both EM and F1). Our best model, large-squad, exhibits a huge improvement of 29.7 in EM and 15.1 in F1 compared to MTMSNlarge.",
"When comparing single-span performance, our best model exhibits slightly better results, but it should be noted that it retains the single-span heads from NABERT+, while in MTMSN they have one head to predict both single-span and multi-span answers. For a fairer comparison, we trained our model with the single-span heads removed, where our multi-span head remained the only head aimed for handling span questions. With this no-single-span-heads setting, while our multi-span performance even improved a bit, our single-span performance suffered a slight drop, ending up trailing by 0.8 in EM and 0.6 in F1 compared to MTMSN. Therefore, it could prove beneficial to try and analyze the reasons behind each model's (ours and MTMSN) relative advantages, and perhaps try to combine them into a more holistic approach of tackling span questions."
],
[
"Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions."
],
[
"In order to analyze the effect of each of our changes, we conduct ablation studies on the development set, depicted in Table TABREF26.",
"Not using the simple preprocessing from Section SECREF17 resulted in a 2.5 point decrease in both EM and F1. The numeric questions were the most affected, with their performance dropping by 3.5 points. Given that number questions make up about 61% of the dataset, we can deduce that our improved number handling is responsible for about a 2.1 point gain, while the rest could be be attributed to the improved Wikipedia parsing.",
"Although NER span cleaning (Section SECREF23) affected only 3% of the multi-span questions, it provided a solid improvement of 5.4 EM in multi-span questions and 1.5 EM in single-span questions. The single-span improvement is probably due to the combination of better multi-span head learning as a result of fixing multi-span questions and the fact that the multi-span head can answer single-span questions as well.",
"Not using the single-span heads results in a slight drop in multi-span performance, and a noticeable drop in single-span performance. However when performing the same comparison between our large models (see Table TABREF24), this performance gap becomes significantly smaller.",
"As expected, not using the multi-span head causes the multi-span performance to plummet. Note that for this ablation test the single-span heads were permitted to train on multi-span questions.",
"Compared to using greedy decoding in the prediction of multi-span questions, using beam search results in a small improvement. We used a beam with of 5, and didn't perform extensive tuning of the beam width."
],
[
"In this work, we introduced a new approach for tackling multi-span questions in reading comprehension datasets. This approach is based on individually tagging each token with a categorical tag, relying on the tokens' contextual representation to bridge the information gap resulting from the tokens being tagged individually.",
"First, we show that integrating this new approach into an existing model, NABERT+, does not hinder performance on other questions types, while substantially improving the results on multi-span questions. Later, we compare our results to the current state-of-the-art on multi-span questions. We show that our model has a clear advantage in handling multi-span questions, with a 29.7 absolute improvement in EM, and a 15.1 absolute improvement in F1. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataeset. Finally, we present some ablation studies, analyzing the benefit gained from individual components of our model.",
"We believe that combining our tag-based approach for handling multi-span questions with current successful techniques for handling single-span questions could prove beneficial in finding better, more holistic ways, of tackling span questions in general."
],
[
"Currently, For each individual span, we optimize the average likelihood over all its possible tag sequences (see Section SECREF9). A different approach could be not taking each possible tag sequence into account but only the most likely one. This could provide the model more flexibility during training and the ability to focus on the more \"correct\" tag sequences."
],
[
"As mentioned in Section SECREF5, we only considered the representation of the first wordpiece sub-token in our model. It would be interesting to see how different approaches to utilize the other sub-tokens' representations in the tagging task affect the results."
]
]
} | {
"question": [
"What approach did previous models use for multi-span questions?",
"How they use sequence tagging to answer multi-span questions?",
"What is difference in peformance between proposed model and state-of-the art on other question types?",
"What is the performance of proposed model on entire DROP dataset?",
"What is the previous model that attempted to tackle multi-span questions as a part of its design?"
],
"question_id": [
"9ab43f941c11a4b09a0e4aea61b4a5b4612e7933",
"5a02a3dd26485a4e4a77411b50b902d2bda3731b",
"579941de2838502027716bae88e33e79e69997a6",
"9a65cfff4d99e4f9546c72dece2520cae6231810",
"a9def7958eac7b9a780403d4f136927f756bab83"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Only MTMSM specifically tried to tackle the multi-span questions. Their approach consisted of two parts: first train a dedicated categorical variable to predict the number of spans to extract and the second was to generalize the single-span head method of extracting a span",
"evidence": [
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable."
],
"highlighted_evidence": [
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable"
]
}
],
"annotation_id": [
"eb32830971e006411f8136f81ff218c63213dc22"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans."
],
"highlighted_evidence": [
"To model an answer which is a collection of spans, the multi-span head uses the $\\mathtt {BIO}$ tagging format BIBREF8: $\\mathtt {B}$ is used to mark the beginning of a span, $\\mathtt {I}$ is used to mark the inside of a span and $\\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans."
]
}
],
"annotation_id": [
"b9cb9e533523d40fc08fe9fe6f00405cae72353d"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "For single-span questions, the proposed LARGE-SQUAD improve performance of the MTMSNlarge baseline for 2.1 EM and 1.55 F1.\nFor number type question, MTMSNlarge baseline have improvement over LARGE-SQUAD for 3,11 EM and 2,98 F1. \nFor date question, LARGE-SQUAD have improvements in 2,02 EM but MTMSNlarge have improvement of 4,39 F1.",
"evidence": [
"FLOAT SELECTED: Table 2. Performance of different models on DROP’s development set in terms of Exact Match (EM) and F1."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. Performance of different models on DROP’s development set in terms of Exact Match (EM) and F1."
]
}
],
"annotation_id": [
"e361bbf537c1249359e6d7634f9e6488e688c131"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "The proposed model achieves EM 77,63 and F1 80,73 on the test and EM 76,95 and F1 80,25 on the dev",
"evidence": [
"Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.",
"FLOAT SELECTED: Table 3. Comparing test and development set results of models from the official DROP leaderboard"
],
"highlighted_evidence": [
"Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.",
"FLOAT SELECTED: Table 3. Comparing test and development set results of models from the official DROP leaderboard"
]
}
],
"annotation_id": [
"00d59243ba4b523fab5776695ac6ab22f0f5b8d0"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"MTMSN BIBREF4"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable."
],
"highlighted_evidence": [
"MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. "
]
}
],
"annotation_id": [
"3ec8399148afa26c5b69d8d430c68cd413913834"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Table 1. Examples of faulty answers for multi-span questions in the training dataset, with their perfect clean answers, and answers generated by our cleaning method",
"Table 2. Performance of different models on DROP’s development set in terms of Exact Match (EM) and F1.",
"Table 3. Comparing test and development set results of models from the official DROP leaderboard",
"Table 4. Ablation tests results summary on DROP’s development set."
],
"file": [
"6-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png"
]
} |
1909.00430 | Transfer Learning Between Related Tasks Using Expected Label Proportions | Deep learning systems thrive on abundance of labeled training data but such data is not always available, calling for alternative methods of supervision. One such method is expectation regularization (XR) (Mann and McCallum, 2007), where models are trained based on expected label proportions. We propose a novel application of the XR framework for transfer learning between related tasks, where knowing the labels of task A provides an estimation of the label proportion of task B. We then use a model trained for A to label a large corpus, and use this corpus with an XR loss to train a model for task B. To make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure. We demonstrate the approach on the task of Aspect-based Sentiment classification, where we effectively use a sentence-level sentiment predictor to train accurate aspect-based predictor. The method improves upon fully supervised neural system trained on aspect-level data, and is also cumulative with LM-based pretraining, as we demonstrate by improving a BERT-based Aspect-based Sentiment model. | {
"section_name": [
"Introduction",
"Lightly Supervised Learning",
"Expectation Regularization (XR)",
"Aspect-based Sentiment Classification",
"Transfer-training between related tasks with XR",
"Stochastic Batched Training for Deep XR",
"Application to Aspect-based Sentiment",
"Relating the classification tasks",
"Classification Architecture",
"Main Results",
"Further experiments",
"Pre-training, Bert",
"Discussion",
"Acknowledgements"
],
"paragraphs": [
[
"Data annotation is a key bottleneck in many data driven algorithms. Specifically, deep learning models, which became a prominent tool in many data driven tasks in recent years, require large datasets to work well. However, many tasks require manual annotations which are relatively hard to obtain at scale. An attractive alternative is lightly supervised learning BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. For example, in label regularization BIBREF0 the model is trained to fit the true label proportions of an unlabeled dataset. Label regularization is special case of expectation regularization (XR) BIBREF0 , in which the model is trained to fit the conditional probabilities of labels given features.",
"In this work we consider the case of correlated tasks, in the sense that knowing the labels for task A provides information on the expected label composition of task B. We demonstrate the approach using sentence-level and aspect-level sentiment analysis, which we use as a running example: knowing that a sentence has positive sentiment label (task A), we can expect that most aspects within this sentence (task B) will also have positive label. While this expectation may be noisy on the individual example level, it holds well in aggregate: given a set of positively-labeled sentences, we can robustly estimate the proportion of positively-labeled aspects within this set. For example, in a random set of positive sentences, we expect to find 90% positive aspects, while in a set of negative sentences, we expect to find 70% negative aspects. These proportions can be easily either guessed or estimated from a small set.",
"We propose a novel application of the XR framework for transfer learning in this setup. We present an algorithm (Sec SECREF12 ) that, given a corpus labeled for task A (sentence-level sentiment), learns a classifier for performing task B (aspect-level sentiment) instead, without a direct supervision signal for task B. We note that the label information for task A is only used at training time. Furthermore, due to the stochastic nature of the estimation, the task A labels need not be fully accurate, allowing us to make use of noisy predictions which are assigned by an automatic classifier (Sections SECREF12 and SECREF4 ). In other words, given a medium-sized sentiment corpus with sentence-level labels, and a large collection of un-annotated text from the same distribution, we can train an accurate aspect-level sentiment classifier.",
"The XR loss allows us to use task A labels for training task B predictors. This ability seamlessly integrates into other semi-supervised schemes: we can use the XR loss on top of a pre-trained model to fine-tune the pre-trained representation to the target task, and we can also take the model trained using XR loss and plentiful data and fine-tune it to the target task using the available small-scale annotated data. In Section SECREF56 we explore these options and show that our XR framework improves the results also when applied on top of a pre-trained Bert-based model BIBREF9 .",
"Finally, to make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure (Section SECREF19 ). Source code is available at https://github.com/MatanBN/XRTransfer."
],
[
"An effective way to supplement small annotated datasets is to use lightly supervised learning, in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. Previous work in lightly-supervised learning focused on training classifiers by using prior knowledge of label proportions BIBREF2 , BIBREF3 , BIBREF10 , BIBREF0 , BIBREF11 , BIBREF12 , BIBREF7 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF8 or prior knowledge of features label associations BIBREF1 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In the context of NLP, BIBREF17 suggested to use distributional similarities of words to train sequence models for part-of-speech tagging and a classified ads information extraction task. BIBREF19 used background lexical information in terms of word-class associations to train a sentiment classifier. BIBREF21 , BIBREF22 suggested to exploit the bilingual correlations between a resource rich language and a resource poor language to train a classifier for the resource poor language in a lightly supervised manner."
],
[
"Expectation Regularization (XR) BIBREF0 is a lightly supervised learning method, in which the model is trained to fit the conditional probabilities of labels given features. In the context of NLP, XR was used by BIBREF20 to train twitter-user attribute prediction using hundreds of noisy distributional expectations based on census demographics. Here, we suggest using XR to train a target task (aspect-level sentiment) based on the output of a related source-task classifier (sentence-level sentiment).",
"The main idea of XR is moving from a fully supervised situation in which each data-point INLINEFORM0 has an associated label INLINEFORM1 , to a setup in which sets of data points INLINEFORM2 are associated with corresponding label proportions INLINEFORM3 over that set.",
"Formally, let INLINEFORM0 be a set of data points, INLINEFORM1 be a set of INLINEFORM2 class labels, INLINEFORM3 be a set of sets where INLINEFORM4 for every INLINEFORM5 , and let INLINEFORM6 be the label distribution of set INLINEFORM7 . For example, INLINEFORM8 would indicate that 70% of data points in INLINEFORM9 are expected to have class 0, 20% are expected to have class 1 and 10% are expected to have class 2. Let INLINEFORM10 be a parameterized function with parameters INLINEFORM11 from INLINEFORM12 to a vector of conditional probabilities over labels in INLINEFORM13 . We write INLINEFORM14 to denote the probability assigned to the INLINEFORM15 th event (the conditional probability of INLINEFORM16 given INLINEFORM17 ).",
"A typically objective when training on fully labeled data of INLINEFORM0 pairs is to maximize likelihood of labeled data using the cross entropy loss, INLINEFORM1 ",
"Instead, in XR our data comes in the form of pairs INLINEFORM0 of sets and their corresponding expected label proportions, and we aim to optimize INLINEFORM1 to fit the label distribution INLINEFORM2 over INLINEFORM3 , for all INLINEFORM4 .",
"As counting the number of predicted class labels over a set INLINEFORM0 leads to a non-differentiable objective, BIBREF0 suggest to relax it and use instead the model's posterior distribution INLINEFORM1 over the set: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 indicates the INLINEFORM1 th entry in INLINEFORM2 . Then, we would like to set INLINEFORM3 such that INLINEFORM4 and INLINEFORM5 are close. BIBREF0 suggest to use KL-divergence for this. KL-divergence is composed of two parts: INLINEFORM6 INLINEFORM7 ",
"Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0 ",
"Notice that computing INLINEFORM0 requires summation over INLINEFORM1 for the entire set INLINEFORM2 , which can be prohibitive. We present batched approximation (Section SECREF19 ) to overcome this.",
" BIBREF0 find that XR might find a degenerate solution. For example, in a three class classification task, where INLINEFORM0 , it might find a solution such that INLINEFORM1 for every instance, as a result, every instance will be classified the same. To avoid this, BIBREF0 suggest to penalize flat distributions by using a temperature coefficient T likewise: DISPLAYFORM0 ",
"Where z is a feature vector and W and b are the linear classifier parameters."
],
[
"In the aspect-based sentiment classification (ABSC) task, we are given a sentence and an aspect, and need to determine the sentiment that is expressed towards the aspect. For example the sentence “Excellent food, although the interior could use some help.“ has two aspects: food and interior, a positive sentiment is expressed about the food, but a negative sentiment is expressed about the interior. A sentence INLINEFORM0 , may contain 0 or more aspects INLINEFORM1 , where each aspect corresponds to a sub-sequence of the original sentence, and has an associated sentiment label (Neg, Pos, or Neu). Concretely, we follow the task definition in the SemEval-2015 and SemEval-2016 shared tasks BIBREF23 , BIBREF24 , in which the relevant aspects are given and the task focuses on finding the sentiment label of the aspects.",
"While sentence-level sentiment labels are relatively easy to obtain, aspect-level annotation are much more scarce, as demonstrated in the small datasets of the SemEval shared tasks."
],
[
"[t!] Inputs: A dataset INLINEFORM0 , batch size INLINEFORM1 , differentiable classifier INLINEFORM2 [H] not converged INLINEFORM3 random( INLINEFORM4 ) INLINEFORM5 random-choice( INLINEFORM6 , INLINEFORM7 ) INLINEFORM8 INLINEFORM9 INLINEFORM10 INLINEFORM11 Compute loss INLINEFORM12 (eq (4)) Compute gradients and update INLINEFORM13 INLINEFORM14 Stochastic Batched XR",
"Consider two classification tasks over a shared input space, a source task INLINEFORM0 from INLINEFORM1 to INLINEFORM2 and a target task INLINEFORM3 from INLINEFORM4 to INLINEFORM5 , which are related through a conditional distribution INLINEFORM6 . In other words, a labeling decision for task INLINEFORM7 induces an expected label distribution over the task INLINEFORM8 . For a set of datapoints INLINEFORM9 that share a source label INLINEFORM10 , we expect to see a target label distribution of INLINEFORM11 .",
"Given a large unlabeled dataset INLINEFORM0 , a small labeled dataset for the target task INLINEFORM1 , classifier INLINEFORM2 (or sufficient training data to train one) for the source task, we wish to use INLINEFORM3 and INLINEFORM4 to train a good classifier INLINEFORM5 for the target task. This can be achieved using the following procedure.",
"Apply INLINEFORM0 to INLINEFORM1 , resulting in a noisy source-side labels INLINEFORM2 for the target task.",
"Estimate the conditional probability INLINEFORM0 table using MLE estimates over INLINEFORM1 INLINEFORM2 ",
"where INLINEFORM0 is a counting function over INLINEFORM1 .",
"Apply INLINEFORM0 to the unlabeled data INLINEFORM1 resulting in labels INLINEFORM2 . Split INLINEFORM3 into INLINEFORM4 sets INLINEFORM5 according to the labeling induced by INLINEFORM6 : INLINEFORM7 ",
"Use Algorithm SECREF12 to train a classifier for the target task using input pairs INLINEFORM0 and the XR loss.",
"In words, by using XR training, we use the expected label proportions over the target task given predicted labels of the source task, to train a target-class classifier."
],
[
" BIBREF0 and following work take the base classifier INLINEFORM0 to be a logistic regression classifier, for which they manually derive gradients for the XR loss and train with LBFGs BIBREF25 . However, nothing precludes us from using an arbitrary neural network instead, as long as it culminates in a softmax layer.",
"One complicating factor is that the computation of INLINEFORM0 in equation ( EQREF5 ) requires a summation over INLINEFORM1 for the entire set INLINEFORM2 , which in our setup may contain hundreds of thousands of examples, making gradient computation and optimization impractical. We instead proposed a stochastic batched approximation in which, instead of requiring that the full constraint set INLINEFORM3 will match the expected label posterior distribution, we require that sufficiently large random subsets of it will match the distribution. At each training step we compute the loss and update the gradient with respect to a different random subset. Specifically, in each training step we sample a random pair INLINEFORM4 , sample a random subset INLINEFORM5 of INLINEFORM6 of size INLINEFORM7 , and compute the local XR loss of set INLINEFORM8 : DISPLAYFORM0 ",
"where INLINEFORM0 is computed by summing over the elements of INLINEFORM1 rather than of INLINEFORM2 in equations ( EQREF5 –2). The stochastic batched XR training algorithm is given in Algorithm SECREF12 . For large enough INLINEFORM3 , the expected label distribution of the subset is the same as that of the complete set."
],
[
"We demonstrate the procedure given above by training Aspect-based Sentiment Classifier (ABSC) using sentence-level sentiment signals."
],
[
"We observe that while the sentence-level sentiment does not determine the sentiment of individual aspects (a positive sentence may contain negative remarks about some aspects), it is very predictive of the proportion of sentiment labels of the fragments within a sentence. Positively labeled sentences are likely to have more positive aspects and fewer negative ones, and vice-versa for negatively-labeled sentences. While these proportions may vary on the individual sentence level, we expect them to be stable when aggregating fragments from several sentences: when considering a large enough sample of fragments that all come from positively labeled sentences, we expect the different samples to have roughly similar label proportions to each other. This situation is idealy suited for performing XR training, as described in section SECREF12 .",
"The application to ABSC is almost straightforward, but is complicated a bit by the decomposition of sentences into fragments: each sentence level decision now corresponds to multiple fragment-level decisions. Thus, we apply the sentence-level (task A) classifier INLINEFORM0 on the aspect-level corpus INLINEFORM1 by applying it on the sentence level and then associating the predicted sentence labels with each of the fragments, resulting in fragment-level labeling. Similarly, when we apply INLINEFORM2 to the unlabeled data INLINEFORM3 we again do it at the sentence level, but the sets INLINEFORM4 are composed of fragments, not sentences: INLINEFORM5 ",
"We then apply algorithm SECREF12 as is: at each step of training we sample a source label INLINEFORM0 Pos,Neg,Neu INLINEFORM1 , sample INLINEFORM2 fragments from INLINEFORM3 , and use the XR loss to fit the expected fragment-label proportions over these INLINEFORM4 fragments to INLINEFORM5 . Figure FIGREF21 illustrates the procedure."
],
[
"We model the ABSC problem by associating each (sentence,aspect) pair with a sentence-fragment, and constructing a neural classifier from fragments to sentiment labels. We heuristically decompose a sentence into fragments. We use the same BiLSTM based neural architecture for both sentence classification and fragment classification.",
"We now describe the procedure we use to associate a sentence fragment with each (sentence,aspect) pairs. The shared tasks data associates each aspect with a pivot-phrase INLINEFORM0 , where pivot phrase INLINEFORM1 is defined as a pre-determined sequence of words that is contained within the sentence. For a sentence INLINEFORM2 , a set of pivot phrases INLINEFORM3 and a specific pivot phrase INLINEFORM4 , we consult the constituency parse tree of INLINEFORM5 and look for tree nodes that satisfy the following conditions:",
"The node governs the desired pivot phrase INLINEFORM0 .",
"The node governs either a verb (VB, VBD, VBN, VBG, VBP, VBZ) or an adjective (JJ, JJR, JJS), which is different than any INLINEFORM0 .",
"The node governs a minimal number of pivot phrases from INLINEFORM0 , ideally only INLINEFORM1 .",
"We then select the highest node in the tree that satisfies all conditions. The span governed by this node is taken as the fragment associated with aspect INLINEFORM0 . The decomposition procedure is demonstrated in Figure FIGREF22 .",
"When aspect-level information is given, we take the pivot-phrases to be the requested aspects. When aspect-level information is not available, we take each noun in the sentence to be a pivot-phrase.",
"Our classification model is a simple 1-layer BiLSTM encoder (a concatenation of the last states of a forward and a backward running LSTMs) followed by a linear-predictor. The encoder is fed either a complete sentence or a sentence fragment."
],
[
"Table TABREF44 compares these baselines to three XR conditions.",
"The first condition, BiLSTM-XR-Dev, performs XR training on the automatically-labeled sentence-level dataset. The only access it has to aspect-level annotation is for estimating the proportions of labels for each sentence-level label, which is done based on the validation set of SemEval-2015 (i.e., 20% of the train set). The XR setting is very effective: without using any in-task data, this model already surpasses all other models, both supervised and semi-supervised, except for the BIBREF35 , BIBREF34 models which achieve higher F1 scores. We note that in contrast to XR, the competing models have complete access to the supervised aspect-based labels. The second condition, BiLSTM-XR, is similar but now the model is allowed to estimate the conditional label proportions based on the entire aspect-based training set (the classifier still does not have direct access to the labels beyond the aggregate proportion information). This improves results further, showing the importance of accurately estimating the proportions. Finally, in BiLSTM-XR+Finetuning, we follow the XR training with fully supervised fine-tuning on the small labeled dataset, using the attention-based model of BIBREF35 . This achieves the best results, and surpasses also the semi-supervised BIBREF35 baseline on accuracy, and matching it on F1.",
"We report significance tests for the robustness of the method under random parameter initialization. Our reported numbers are averaged over five random initialization. Since the datasets are unbalanced w.r.t the label distribution, we report both accuracy and macro-F1.",
"The XR training is also more stable than the other semi-supervised baselines, achieving substantially lower standard deviations across different runs."
],
[
"In each experiment in this section we estimate the proportions using the SemEval-2015 train set.",
"How does the XR training scale with the amount of unlabeled data? Figure FIGREF54 a shows the macro-F1 scores on the entire SemEval-2016 dataset, with different unlabeled corpus sizes (measured in number of sentences). An unannotated corpus of INLINEFORM0 sentences is sufficient to surpass the results of the INLINEFORM1 sentence-level trained classifier, and more unannotated data further improves the results.",
"Our method requires a sentence level classifier INLINEFORM0 to label both the target-task corpus and the unlabeled corpus. How does the quality of this classifier affect the overall XR training? We vary the amount of supervision used to train INLINEFORM1 from 0 sentences (assigning the same label to all sentences), to 100, 1000, 5000 and 10000 sentences. We again measure macro-F1 on the entire SemEval 2016 corpus.",
"The results in Figure FIGREF54 b show that when using the prior distributions of aspects (0), the model struggles to learn from this signal, it learns mostly to predict the majority class, and hence reaches very low F1 scores of 35.28. The more data given to the sentence level classifier, the better the potential results will be when training with our method using the classifier labels, with a classifiers trained on 100,1000,5000 and 10000 labeled sentences, we get a F1 scores of 53.81, 58.84, 61.81, 65.58 respectively. Improvements in the source task classifier's quality clearly contribute to the target task accuracy.",
"The Stochastic Batched XR algorithm (Algorithm SECREF12 ) samples a batch of INLINEFORM0 examples at each step to estimate the posterior label distribution used in the loss computation. How does the size of INLINEFORM1 affect the results? We use INLINEFORM2 fragments in our main experiments, but smaller values of INLINEFORM3 reduce GPU memory load and may train better in practice. We tested our method with varying values of INLINEFORM4 on a sample of INLINEFORM5 , using batches that are composed of fragments of 5, 25, 100, 450, 1000 and 4500 sentences. The results are shown in Figure FIGREF54 c. Setting INLINEFORM6 result in low scores. Setting INLINEFORM7 yields better F1 score but with high variance across runs. For INLINEFORM8 fragments the results begin to stabilize, we also see a slight decrease in F1-scores with larger batch sizes. We attribute this drop despite having better estimation of the gradients to the general trend of larger batch sizes being harder to train with stochastic gradient methods."
],
[
"The XR training can be performed also over pre-trained representations. We experiment with two pre-training methods: (1) pre-training by training the BiLSTM model to predict the noisy sentence-level predictions. (2) Using the pre-trained Bert representation BIBREF9 . For (1), we compare the effect of pre-train on unlabeled corpora of sizes of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 sentences. Results in Figure FIGREF54 d show that this form of pre-training is effective for smaller unlabeled corpora but evens out for larger ones.",
"For the Bert experiments, we experiment with the Bert-base model with INLINEFORM1 sets, 30 epochs for XR training or sentence level fine-tuning and 15 epochs for aspect based fine-tuning, on each training method we evaluated the model on the dev set after each epoch and the best model was chosen. We compare the following setups:",
"-Bert INLINEFORM0 Aspect Based Finetuning: pretrained bert model finetuned to the aspect based task.",
"-Bert INLINEFORM0 : A pretrained bert model finetuned to the sentence level task on the INLINEFORM1 sentences, and tested by predicting fragment-level sentiment.",
"-Bert INLINEFORM0 INLINEFORM1 INLINEFORM2 Aspect Based Finetuning: pretrained bert model finetuned to the sentence level task, and finetuned again to the aspect based one.",
"-Bert INLINEFORM0 XR: pretrained bert model followed by XR training using our method.",
"-Bert INLINEFORM0 XR INLINEFORM1 Aspect Based Finetuning: pretrained bert followed by XR training and then fine-tuned to the aspect level task.",
"The results are presented in Table TABREF55 . As before, aspect-based fine-tuning is beneficial for both SemEval-16 and SemEval-15. Training a BiLSTM with XR surpasses pre-trained bert models and using XR training on top of the pre-trained Bert models substantially increases the results even further."
],
[
"We presented a transfer learning method based on expectation regularization (XR), and demonstrated its effectiveness for training aspect-based sentiment classifiers using sentence-level supervision. The method achieves state-of-the-art results for the task, and is also effective for improving on top of a strong pre-trained Bert model. The proposed method provides an additional data-efficient tool in the modeling arsenal, which can be applied on its own or together with another training method, in situations where there is a conditional relations between the labels of a source task for which we have supervision, and a target task for which we don't.",
"While we demonstrated the approach on the sentiment domain, the required conditional dependence between task labels is present in many situations. Other possible application of the method includes training language identification of tweets given geo-location supervision (knowing the geographical region gives a prior on languages spoken), training predictors for renal failure from textual medical records given classifier for diabetes (there is a strong correlation between the two conditions), training a political affiliation classifier from social media tweets based on age-group classifiers, zip-code information, or social-status classifiers (there are known correlations between all of these to political affiliation), training hate-speech detection based on emotion detection, and so on."
],
[
"The work was supported in part by The Israeli Science Foundation (grant number 1555/15)."
]
]
} | {
"question": [
"How much more data does the model trained using XR loss have access to, compared to the fully supervised model?",
"Does the system trained only using XR loss outperform the fully supervised neural system?",
"How accurate is the aspect based sentiment classifier trained only using the XR loss?",
"How is the expectation regularization loss defined?"
],
"question_id": [
"547be35cff38028648d199ad39fb48236cfb99ee",
"47a30eb4d0d6f5f2ff4cdf6487265a25c1b18fd8",
"e42fbf6c183abf1c6c2321957359c7683122b48e",
"e574f0f733fb98ecef3c64044004aa7a320439be"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
""
],
"question_writer": [
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"8f217f179202ac3fbdd22ceb878a60b4ca2b14c8"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)."
]
}
],
"annotation_id": [
"c4972dbb4595bf72a99bc4fc9e530d5cc07683ff"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "BiLSTM-XR-Dev Estimation accuracy is 83.31 for SemEval-15 and 87.68 for SemEval-16.\nBiLSTM-XR accuracy is 83.31 for SemEval-15 and 88.12 for SemEval-16.\n",
"evidence": [
"FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b)."
]
}
],
"annotation_id": [
"caedefe56dedd1f6fa029b6f8ee71fab6a65f1c5"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"DISPLAYFORM0"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0"
],
"highlighted_evidence": [
"Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0"
]
}
],
"annotation_id": [
"0109c97a8e3ec8291b6dadbba5e09ce4a13b13be"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Figure 1: Illustration of the algorithm. Cs is applied to Du resulting in ỹ for each sentence, Uj is built according with the fragments of the same labelled sentences, the probabilities for each fragment in Uj are summed and normalized, the XR loss in equation (4) is calculated and the network is updated.",
"Figure 2: Illustration of the decomposition procedure, when given a1=“duck confit“ and a2= “foie gras terrine with figs“ as the pivot phrases.",
"Table 1: Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a). Numbers for Semisupervised are from (He et al., 2018b).",
"Figure 3: Macro-F1 scores for the entire SemEval-2016 dataset of the different analyses. (a) the contribution of unlabeled data. (b) the effect of sentence classifier quality. (c) the effect of k. (d) the effect of sentence-level pretraining vs. corpus size.",
"Table 2: BERT pre-training: average accuracies and Macro-F1 scores from five runs and their stdev. ∗ indicates that the method’s result is significantly better than all baseline methods, † indicates that the method’s result is significantly better than all non XR baseline methods, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively."
],
"file": [
"5-Figure1-1.png",
"5-Figure2-1.png",
"7-Table1-1.png",
"9-Figure3-1.png",
"9-Table2-1.png"
]
} |
1910.11493 | The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection | The SIGMORPHON 2019 shared task on cross-lingual transfer and contextual analysis in morphology examined transfer learning of inflection between 100 language pairs, as well as contextual lemmatization and morphosyntactic description in 66 languages. The first task evolves past years' inflection tasks by examining transfer of morphological inflection knowledge from a high-resource language to a low-resource language. This year also presents a new second challenge on lemmatization and morphological feature analysis in context. All submissions featured a neural component and built on either this year's strong baselines or highly ranked systems from previous years' shared tasks. Every participating team improved in accuracy over the baselines for the inflection task (though not Levenshtein distance), and every team in the contextual analysis task improved on both state-of-the-art neural and non-neural baselines. | {
"section_name": [
"Introduction",
"Tasks and Evaluation ::: Task 1: Cross-lingual transfer for morphological inflection",
"Tasks and Evaluation ::: Task 1: Cross-lingual transfer for morphological inflection ::: Example",
"Tasks and Evaluation ::: Task 1: Cross-lingual transfer for morphological inflection ::: Evaluation",
"Tasks and Evaluation ::: Task 2: Morphological analysis in context",
"Data ::: Data for Task 1 ::: Language pairs",
"Data ::: Data for Task 1 ::: Data format",
"Data ::: Data for Task 1 ::: Extraction from Wiktionary",
"Data ::: Data for Task 1 ::: Sampling data splits",
"Data ::: Data for Task 1 ::: Other modifications",
"Data ::: Data for Task 2",
"Data ::: Data for Task 2 ::: Data conversion",
"Baselines ::: Task 1 Baseline",
"Baselines ::: Task 2 Baselines ::: Non-neural",
"Baselines ::: Task 2 Baselines ::: Neural",
"Results",
"Results ::: Task 1 Results",
"Results ::: Task 2 Results",
"Future Directions",
"Conclusions",
"Acknowledgments"
],
"paragraphs": [
[
"While producing a sentence, humans combine various types of knowledge to produce fluent output—various shades of meaning are expressed through word selection and tone, while the language is made to conform to underlying structural rules via syntax and morphology. Native speakers are often quick to identify disfluency, even if the meaning of a sentence is mostly clear.",
"Automatic systems must also consider these constraints when constructing or processing language. Strong enough language models can often reconstruct common syntactic structures, but are insufficient to properly model morphology. Many languages implement large inflectional paradigms that mark both function and content words with a varying levels of morphosyntactic information. For instance, Romanian verb forms inflect for person, number, tense, mood, and voice; meanwhile, Archi verbs can take on thousands of forms BIBREF0. Such complex paradigms produce large inventories of words, all of which must be producible by a realistic system, even though a large percentage of them will never be observed over billions of lines of linguistic input. Compounding the issue, good inflectional systems often require large amounts of supervised training data, which is infeasible in many of the world's languages.",
"This year's shared task is concentrated on encouraging the construction of strong morphological systems that perform two related but different inflectional tasks. The first task asks participants to create morphological inflectors for a large number of under-resourced languages, encouraging systems that use highly-resourced, related languages as a cross-lingual training signal. The second task welcomes submissions that invert this operation in light of contextual information: Given an unannotated sentence, lemmatize each word, and tag them with a morphosyntactic description. Both of these tasks extend upon previous morphological competitions, and the best submitted systems now represent the state of the art in their respective tasks."
],
[
"Annotated resources for the world's languages are not distributed equally—some languages simply have more as they have more native speakers willing and able to annotate more data. We explore how to transfer knowledge from high-resource languages that are genetically related to low-resource languages.",
"The first task iterates on last year's main task: morphological inflection BIBREF1. Instead of giving some number of training examples in the language of interest, we provided only a limited number in that language. To accompany it, we provided a larger number of examples in either a related or unrelated language. Each test example asked participants to produce some other inflected form when given a lemma and a bundle of morphosyntactic features as input. The goal, thus, is to perform morphological inflection in the low-resource language, having hopefully exploited some similarity to the high-resource language. Models which perform well here can aid downstream tasks like machine translation in low-resource settings. All datasets were resampled from UniMorph, which makes them distinct from past years.",
"The mode of the task is inspired by BIBREF2, who fine-tune a model pre-trained on a high-resource language to perform well on a low-resource language. We do not, though, require that models be trained by fine-tuning. Joint modeling or any number of methods may be explored instead."
],
[
"The model will have access to type-level data in a low-resource target language, plus a high-resource source language. We give an example here of Asturian as the target language with Spanish as the source language.",
""
],
[
"We score the output of each system in terms of its predictions' exact-match accuracy and the average Levenshtein distance between the predictions and their corresponding true forms."
],
[
"Although inflection of words in a context-agnostic manner is a useful evaluation of the morphological quality of a system, people do not learn morphology in isolation.",
"In 2018, the second task of the CoNLL–SIGMORPHON Shared Task BIBREF1 required submitting systems to complete an inflectional cloze task BIBREF3 given only the sentential context and the desired lemma – an example of the problem is given in the following lines: A successful system would predict the plural form “dogs”. Likewise, a Spanish word form ayuda may be a feminine noun or a third-person verb form, which must be disambiguated by context.",
"",
"This year's task extends the second task from last year. Rather than inflect a single word in context, the task is to provide a complete morphological tagging of a sentence: for each word, a successful system will need to lemmatize and tag it with a morphsyntactic description (MSD).",
"width=",
"Context is critical—depending on the sentence, identical word forms realize a large number of potential inflectional categories, which will in turn influence lemmatization decisions. If the sentence were instead “The barking dogs kept us up all night”, “barking” is now an adjective, and its lemma is also “barking”."
],
[
"We presented data in 100 language pairs spanning 79 unique languages. Data for all but four languages (Basque, Kurmanji, Murrinhpatha, and Sorani) are extracted from English Wiktionary, a large multi-lingual crowd-sourced dictionary with morphological paradigms for many lemmata. 20 of the 100 language pairs are either distantly related or unrelated; this allows speculation into the relative importance of data quantity and linguistic relatedness."
],
[
"For each language, the basic data consists of triples of the form (lemma, feature bundle, inflected form), as in tab:sub1data. The first feature in the bundle always specifies the core part of speech (e.g., verb). For each language pair, separate files contain the high- and low-resource training examples.",
"All features in the bundle are coded according to the UniMorph Schema, a cross-linguistically consistent universal morphological feature set BIBREF8, BIBREF9."
],
[
"For each of the Wiktionary languages, Wiktionary provides a number of tables, each of which specifies the full inflectional paradigm for a particular lemma. As in the previous iteration, tables were extracted using a template annotation procedure described in BIBREF10."
],
[
"From each language's collection of paradigms, we sampled the training, development, and test sets as in 2018. Crucially, while the data were sampled in the same fashion, the datasets are distinct from those used for the 2018 shared task.",
"Our first step was to construct probability distributions over the (lemma, feature bundle, inflected form) triples in our full dataset. For each triple, we counted how many tokens the inflected form has in the February 2017 dump of Wikipedia for that language. To distribute the counts of an observed form over all the triples that have this token as its form, we follow the method used in the previous shared task BIBREF1, training a neural network on unambiguous forms to estimate the distribution over all, even ambiguous, forms. We then sampled 12,000 triples without replacement from this distribution. The first 100 were taken as training data for low-resource settings. The first 10,000 were used as high-resource training sets. As these sets are nested, the highest-count triples tend to appear in the smaller training sets.",
"The final 2000 triples were randomly shuffled and then split in half to obtain development and test sets of 1000 forms each. The final shuffling was performed to ensure that the development set is similar to the test set. By contrast, the development and test sets tend to contain lower-count triples than the training set."
],
[
"We further adopted some changes to increase compatibility. Namely, we corrected some annotation errors created while scraping Wiktionary for the 2018 task, and we standardized Romanian t-cedilla and t-comma to t-comma. (The same was done with s-cedilla and s-comma.)"
],
[
"Our data for task 2 come from the Universal Dependencies treebanks BIBREF11, which provides pre-defined training, development, and test splits and annotations in a unified annotation schema for morphosyntax and dependency relationships. Unlike the 2018 cloze task which used UD data, we require no manual data preparation and are able to leverage all 107 monolingual treebanks. As is typical, data are presented in CoNLL-U format, although we modify the morphological feature and lemma fields."
],
[
"The morphological annotations for the 2019 shared task were converted to the UniMorph schema BIBREF10 according to BIBREF12, who provide a deterministic mapping that increases agreement across languages. This also moves the part of speech into the bundle of morphological features. We do not attempt to individually correct any errors in the UD source material. Further, some languages received additional pre-processing. In the Finnish data, we removed morpheme boundaries that were present in the lemmata (e.g., puhe#kieli $\\mapsto $ puhekieli `spoken+language'). Russian lemmata in the GSD treebank were presented in all uppercase; to match the 2018 shared task, we lowercased these. In development and test data, all fields except for form and index within the sentence were struck."
],
[
"We include four neural sequence-to-sequence models mapping lemma into inflected word forms: soft attention BIBREF13, non-monotonic hard attention BIBREF14, monotonic hard attention and a variant with offset-based transition distribution BIBREF15. Neural sequence-to-sequence models with soft attention BIBREF13 have dominated previous SIGMORPHON shared tasks BIBREF16. BIBREF14 instead models the alignment between characters in the lemma and the inflected word form explicitly with hard attention and learns this alignment and transduction jointly. BIBREF15 shows that enforcing strict monotonicity with hard attention is beneficial in tasks such as morphological inflection where the transduction is mostly monotonic. The encoder is a biLSTM while the decoder is a left-to-right LSTM. All models use multiplicative attention and have roughly the same number of parameters. In the model, a morphological tag is fed to the decoder along with target character embeddings to guide the decoding. During the training of the hard attention model, dynamic programming is applied to marginalize all latent alignments exactly."
],
[
"BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. The model is globally normalized with the use of a second order linear-chain CRF. To efficiently calculate the partition function, the choice of lemmata are pruned with the use of pre-extracted edit trees."
],
[
"BIBREF18: This is a state-of-the-art neural model that also performs joint morphological tagging and lemmatization, but also accounts for the exposure bias with the application of maximum likelihood (MLE). The model stitches the tagger and lemmatizer together with the use of jackknifing BIBREF19 to expose the lemmatizer to the errors made by the tagger model during training. The morphological tagger is based on a character-level biLSTM embedder that produces the embedding for a word, and a word-level biLSTM tagger that predicts a morphological tag sequence for each word in the sentence. The lemmatizer is a neural sequence-to-sequence model BIBREF15 that uses the decoded morphological tag sequence from the tagger as an additional attribute. The model uses hard monotonic attention instead of standard soft attention, along with a dynamic programming based training scheme."
],
[
"The SIGMORPHON 2019 shared task received 30 submissions—14 for task 1 and 16 for task 2—from 23 teams. In addition, the organizers' baseline systems were evaluated."
],
[
"Five teams participated in the first Task, with a variety of methods aimed at leveraging the cross-lingual data to improve system performance.",
"The University of Alberta (UAlberta) performed a focused investigation on four language pairs, training cognate-projection systems from external cognate lists. Two methods were considered: one which trained a high-resource neural encoder-decoder, and projected the test data into the HRL, and one that projected the HRL data into the LRL, and trained a combined system. Results demonstrated that certain language pairs may be amenable to such methods.",
"The Tuebingen University submission (Tuebingen) aligned source and target to learn a set of edit-actions with both linear and neural classifiers that independently learned to predict action sequences for each morphological category. Adding in the cross-lingual data only led to modest gains.",
"AX-Semantics combined the low- and high-resource data to train an encoder-decoder seq2seq model; optionally also implementing domain adaptation methods to focus later epochs on the target language.",
"The CMU submission first attends over a decoupled representation of the desired morphological sequence before using the updated decoder state to attend over the character sequence of the lemma. Secondly, in order to reduce the bias of the decoder's language model, they hallucinate two types of data that encourage common affixes and character copying. Simply allowing the model to learn to copy characters for several epochs significantly out-performs the task baseline, while further improvements are obtained through fine-tuning. Making use of an adversarial language discriminator, cross lingual gains are highly-correlated to linguistic similarity, while augmenting the data with hallucinated forms and multiple related target language further improves the model.",
"The system from IT-IST also attends separately to tags and lemmas, using a gating mechanism to interpolate the importance of the individual attentions. By combining the gated dual-head attention with a SparseMax activation function, they are able to jointly learn stem and affix modifications, improving significantly over the baseline system.",
"The relative system performance is described in tab:sub2team, which shows the average per-language accuracy of each system. The table reflects the fact that some teams submitted more than one system (e.g. Tuebingen-1 & Tuebingen-2 in the table)."
],
[
"Nine teams submitted system papers for Task 2, with several interesting modifications to either the baseline or other prior work that led to modest improvements.",
"Charles-Saarland achieved the highest overall tagging accuracy by leveraging multi-lingual BERT embeddings fine-tuned on a concatenation of all available languages, effectively transporting the cross-lingual objective of Task 1 into Task 2. Lemmas and tags are decoded separately (with a joint encoder and separate attention); Lemmas are a sequence of edit-actions, while tags are calculated jointly. (There is no splitting of tags into features; tags are atomic.)",
"CBNU instead lemmatize using a transformer network, while performing tagging with a multilayer perceptron with biaffine attention. Input words are first lemmatized, and then pipelined to the tagger, which produces atomic tag sequences (i.e., no splitting of features).",
"The team from Istanbul Technical University (ITU) jointly produces lemmatic edit-actions and morphological tags via a two level encoder (first word embeddings, and then context embeddings) and separate decoders. Their system slightly improves over the baseline lemmatization, but significantly improves tagging accuracy.",
"The team from the University of Groningen (RUG) also uses separate decoders for lemmatization and tagging, but uses ELMo to initialize the contextual embeddings, leading to large gains in performance. Furthermore, joint training on related languages further improves results.",
"CMU approaches tagging differently than the multi-task decoding we've seen so far (baseline is used for lemmatization). Making use of a hierarchical CRF that first predicts POS (that is subsequently looped back into the encoder), they then seek to predict each feature separately. In particular, predicting POS separately greatly improves results. An attempt to leverage gold typological information led to little gain in the results; experiments suggest that the system is already learning the pertinent information.",
"The team from Ohio State University (OHIOSTATE) concentrates on predicting tags; the baseline lemmatizer is used for lemmatization. To that end, they make use of a dual decoder that first predicts features given only the word embedding as input; the predictions are fed to a GRU seq2seq, which then predicts the sequence of tags.",
"The UNT HiLT+Ling team investigates a low-resource setting of the tagging, by using parallel Bible data to learn a translation matrix between English and the target language, learning morphological tags through analogy with English.",
"The UFAL-Prague team extends their submission from the UD shared task (multi-layer LSTM), replacing the pretrained embeddings with BERT, to great success (first in lemmatization, 2nd in tagging). Although they predict complete tags, they use the individual features to regularize the decoder. Small gains are also obtained from joining multi-lingual corpora and ensembling.",
"CUNI–Malta performs lemmatization as operations over edit actions with LSTM and ReLU. Tagging is a bidirectional LSTM augmented by the edit actions (i.e., two-stage decoding), predicting features separately.",
"The Edinburgh system is a character-based LSTM encoder-decoder with attention, implemented in OpenNMT. It can be seen as an extension of the contextual lemmatization system Lematus BIBREF20 to include morphological tagging, or alternatively as an adaptation of the morphological re-inflection system MED BIBREF21 to incorporate context and perform analysis rather than re-inflection. Like these systems it uses a completely generic encoder-decoder architecture with no specific adaptation to the morphological processing task other than the form of the input. In the submitted version of the system, the input is split into short chunks corresponding to the target word plus one word of context on either side, and the system is trained to output the corresponding lemmas and tags for each three-word chunk.",
"Several teams relied on external resources to improve their lemmatization and feature analysis. Several teams made use of pre-trained embeddings. CHARLES-SAARLAND-2 and UFALPRAGUE-1 used pretrained contextual embeddings (BERT) provided by Google BIBREF22. CBNU-1 used a mix of pre-trained embeddings from the CoNLL 2017 shared task and fastText. Further, some teams trained their own embeddings to aid performance."
],
[
"In general, the application of typology to natural language processing BIBREF23, BIBREF24 provides an interesting avenue for multilinguality. Further, our shared task was designed to only leverage a single helper language, though many may exist with lexical or morphological overlap with the target language. Techniques like those of BIBREF25 may aid in designing universal inflection architectures. Neither task this year included unannotated monolingual corpora. Using such data is well-motivated from an L1-learning point of view, and may affect the performance of low-resource data settings.",
"In the case of inflection an interesting future topic could involve departing from orthographic representation and using more IPA-like representations, i.e. transductions over pronunciations. Different languages, in particular those with idiosyncratic orthographies, may offer new challenges in this respect.",
"Only one team tried to learn inflection in a multilingual setting—i.e. to use all training data to train one model. Such transfer learning is an interesting avenue of future research, but evaluation could be difficult. Whether any cross-language transfer is actually being learned vs. whether having more data better biases the networks to copy strings is an evaluation step to disentangle.",
"Creating new data sets that accurately reflect learner exposure (whether L1 or L2) is also an important consideration in the design of future shared tasks. One pertinent facet of this is information about inflectional categories—often the inflectional information is insufficiently prescribed by the lemma, as with the Romanian verbal inflection classes or nominal gender in German.",
"As we move toward multilingual models for morphology, it becomes important to understand which representations are critical or irrelevant for adapting to new languages; this may be probed in the style of BIBREF27, and it can be used as a first step toward designing systems that avoid catastrophic forgetting as they learn to inflect new languages BIBREF28.",
"Future directions for Task 2 include exploring cross-lingual analysis—in stride with both Task 1 and BIBREF29—and leveraging these analyses in downstream tasks."
],
[
"The SIGMORPHON 2019 shared task provided a type-level evaluation on 100 language pairs in 79 languages and a token-level evaluation on 107 treebanks in 66 languages, of systems for inflection and analysis. On task 1 (low-resource inflection with cross-lingual transfer), 14 systems were submitted, while on task 2 (lemmatization and morphological feature analysis), 16 systems were submitted. All used neural network models, completing a trend in past years' shared tasks and other recent work on morphology.",
"In task 1, gains from cross-lingual training were generally modest, with gains positively correlating with the linguistic similarity of the two languages.",
"In the second task, several methods were implemented by multiple groups, with the most successful systems implementing variations of multi-headed attention, multi-level encoding, multiple decoders, and ELMo and BERT contextual embeddings.",
"We have released the training, development, and test sets, and expect these datasets to provide a useful benchmark for future research into learning of inflectional morphology and string-to-string transduction."
],
[
"MS has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113)."
]
]
} | {
"question": [
"What were the non-neural baselines used for the task?"
],
"question_id": [
"b65b1c366c8bcf544f1be5710ae1efc6d2b1e2f1"
],
"nlp_background": [
"two"
],
"topic_background": [
"unfamiliar"
],
"paper_read": [
"no"
],
"search_query": [
"morphology"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "The Lemming model in BIBREF17",
"evidence": [
"BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. The model is globally normalized with the use of a second order linear-chain CRF. To efficiently calculate the partition function, the choice of lemmata are pruned with the use of pre-extracted edit trees."
],
"highlighted_evidence": [
"BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. "
]
}
],
"annotation_id": [
"012a77e1bbdaa410ad83a28a87526db74bd1e353"
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
]
} | {
"caption": [
"Table 1: Sample language pair and data format for Task 1",
"Table 2: Task 1 Team Scores, averaged across all Languages; * indicates submissions were only applied to a subset of languages, making scores incomparable. † indicates that additional resources were used for training.",
"Table 3: Task 1 Accuracy scores",
"Table 4: Task 1 Levenshtein scores",
"Table 5: Task 2 Team Scores, averaged across all treebanks; * indicates submissions were only applied to a subset of languages, making scores incomparable. † indicates that additional external resources were used for training, and ‡ indicates that training data were shared across languages or treebanks.",
"Table 6: Task 2 Lemma Accuracy scores",
"Table 7: Task 2 Lemma Levenshtein scores"
],
"file": [
"2-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"10-Table7-1.png"
]
} |
1910.00912 | Hierarchical Multi-Task Natural Language Understanding for Cross-domain Conversational AI: HERMIT NLU | We present a new neural architecture for wide-coverage Natural Language Understanding in Spoken Dialogue Systems. We develop a hierarchical multi-task architecture, which delivers a multi-layer representation of sentence meaning (i.e., Dialogue Acts and Frame-like structures). The architecture is a hierarchy of self-attention mechanisms and BiLSTM encoders followed by CRF tagging layers. We describe a variety of experiments, showing that our approach obtains promising results on a dataset annotated with Dialogue Acts and Frame Semantics. Moreover, we demonstrate its applicability to a different, publicly available NLU dataset annotated with domain-specific intents and corresponding semantic roles, providing overall performance higher than state-of-the-art tools such as RASA, Dialogflow, LUIS, and Watson. For example, we show an average 4.45% improvement in entity tagging F-score over Rasa, Dialogflow and LUIS. | {
"section_name": [
"Introduction",
"Introduction ::: Cross-domain NLU",
"Introduction ::: Multi-task NLU",
"Introduction ::: Multi-dialogue act and -intent NLU",
"Related Work",
"Jointly parsing dialogue acts and frame-like structures",
"Jointly parsing dialogue acts and frame-like structures ::: Architecture description",
"Experimental Evaluation",
"Experimental Evaluation ::: Datasets",
"Experimental Evaluation ::: Datasets ::: NLU-Benchmark dataset",
"Experimental Evaluation ::: Datasets ::: ROMULUS dataset",
"Experimental Evaluation ::: Experimental setup",
"Experimental Evaluation ::: Experiments on the NLU-Benchmark",
"Experimental Evaluation ::: Experiments on the NLU-Benchmark ::: Ablation study",
"Experimental Evaluation ::: Experiments on the ROMULUS dataset",
"Experimental Evaluation ::: Discussion",
"Future Work",
"Conclusion",
"Acknowledgement"
],
"paragraphs": [
[
"Research in Conversational AI (also known as Spoken Dialogue Systems) has applications ranging from home devices to robotics, and has a growing presence in industry. A key problem in real-world Dialogue Systems is Natural Language Understanding (NLU) – the process of extracting structured representations of meaning from user utterances. In fact, the effective extraction of semantics is an essential feature, being the entry point of any Natural Language interaction system. Apart from challenges given by the inherent complexity and ambiguity of human language, other challenges arise whenever the NLU has to operate over multiple domains. In fact, interaction patterns, domain, and language vary depending on the device the user is interacting with. For example, chit-chatting and instruction-giving for executing an action are different processes in terms of language, domain, syntax and interaction schemes involved. And what if the user combines two interaction domains: “play some music, but first what's the weather tomorrow”?",
"In this work, we present HERMIT, a HiERarchical MultI-Task Natural Language Understanding architecture, designed for effective semantic parsing of domain-independent user utterances, extracting meaning representations in terms of high-level intents and frame-like semantic structures. With respect to previous approaches to NLU for SDS, HERMIT stands out for being a cross-domain, multi-task architecture, capable of recognising multiple intents/frames in an utterance. HERMIT also shows better performance with respect to current state-of-the-art commercial systems. Such a novel combination of requirements is discussed below."
],
[
"A cross-domain dialogue agent must be able to handle heterogeneous types of conversation, such as chit-chatting, giving directions, entertaining, and triggering domain/task actions. A domain-independent and rich meaning representation is thus required to properly capture the intent of the user. Meaning is modelled here through three layers of knowledge: dialogue acts, frames, and frame arguments. Frames and arguments can be in turn mapped to domain-dependent intents and slots, or to Frame Semantics' BIBREF0 structures (i.e. semantic frames and frame elements, respectively), which allow handling of heterogeneous domains and language."
],
[
"Deriving such a multi-layered meaning representation can be approached through a multi-task learning approach. Multi-task learning has found success in several NLP problems BIBREF1, BIBREF2, especially with the recent rise of Deep Learning. Thanks to the possibility of building complex networks, handling more tasks at once has been proven to be a successful solution, provided that some degree of dependence holds between the tasks. Moreover, multi-task learning allows the use of different datasets to train sub-parts of the network BIBREF3. Following the same trend, HERMIT is a hierarchical multi-task neural architecture which is able to deal with the three tasks of tagging dialogue acts, frame-like structures, and their arguments in parallel. The network, based on self-attention mechanisms, seq2seq bi-directional Long-Short Term Memory (BiLSTM) encoders, and CRF tagging layers, is hierarchical in the sense that information output from earlier layers flows through the network, feeding following layers to solve downstream dependent tasks."
],
[
"Another degree of complexity in NLU is represented by the granularity of knowledge that can be extracted from an utterance. Utterance semantics is often rich and expressive: approximating meaning to a single user intent is often not enough to convey the required information. As opposed to the traditional single-dialogue act and single-intent view in previous work BIBREF4, BIBREF5, BIBREF6, HERMIT operates on a meaning representation that is multi-dialogue act and multi-intent. In fact, it is possible to model an utterance's meaning through multiple dialogue acts and intents at the same time. For example, the user would be able both to request tomorrow's weather and listen to his/her favourite music with just a single utterance.",
"A further requirement is that for practical application the system should be competitive with state-of-the-art: we evaluate HERMIT's effectiveness by running several empirical investigations. We perform a robust test on a publicly available NLU-Benchmark (NLU-BM) BIBREF7 containing 25K cross-domain utterances with a conversational agent. The results obtained show a performance higher than well-known off-the-shelf tools (i.e., Rasa, DialogueFlow, LUIS, and Watson). The contribution of the different network components is then highlighted through an ablation study. We also test HERMIT on the smaller Robotics-Oriented MUltitask Language UnderStanding (ROMULUS) corpus, annotated with Dialogue Acts and Frame Semantics. HERMIT produces promising results for the application in a real scenario."
],
[
"Much research on Natural (or Spoken, depending on the input) Language Understanding has been carried out in the area of Spoken Dialogue Systems BIBREF8, where the advent of statistical learning has led to the application of many data-driven approaches BIBREF9. In recent years, the rise of deep learning models has further improved the state-of-the-art. Recurrent Neural Networks (RNNs) have proven to be particularly successful, especially uni- and bi-directional LSTMs and Gated Recurrent Units (GRUs). The use of such deep architectures has also fostered the development of joint classification models of intents and slots. Bi-directional GRUs are applied in BIBREF10, where the hidden state of each time step is used for slot tagging in a seq2seq fashion, while the final state of the GRU is used for intent classification. The application of attention mechanisms in a BiLSTM architecture is investigated in BIBREF5, while the work of BIBREF11 explores the use of memory networks BIBREF12 to exploit encoding of historical user utterances to improve the slot-filling task. Seq2seq with self-attention is applied in BIBREF13, where the classified intent is also used to guide a special gated unit that contributes to the slot classification of each token.",
"One of the first attempts to jointly detect domains in addition to intent-slot tagging is the work of BIBREF4. An utterance syntax is encoded through a Recursive NN, and it is used to predict the joined domain-intent classes. Syntactic features extracted from the same network are used in the per-word slot classifier. The work of BIBREF6 applies the same idea of BIBREF10, this time using a context-augmented BiLSTM, and performing domain-intent classification as a single joint task. As in BIBREF11, the history of user utterances is also considered in BIBREF14, in combination with a dialogue context encoder. A two-layer hierarchical structure made of a combination of BiLSTM and BiGRU is used for joint classification of domains and intents, together with slot tagging. BIBREF15 apply multi-task learning to the dialogue domain. Dialogue state tracking, dialogue act and intent classification, and slot tagging are jointly learned. Dialogue states and user utterances are encoded to provide hidden representations, which jointly affect all the other tasks.",
"Many previous systems are trained and compared over the ATIS (Airline Travel Information Systems) dataset BIBREF16, which covers only the flight-booking domain. Some of them also use bigger, not publicly available datasets, which appear to be similar to the NLU-BM in terms of number of intents and slots, but they cover no more than three or four domains. Our work stands out for its more challenging NLU setting, since we are dealing with a higher number of domains/scenarios (18), intents (64) and slots (54) in the NLU-BM dataset, and dialogue acts (11), frames (58) and frame elements (84) in the ROMULUS dataset. Moreover, we propose a multi-task hierarchical architecture, where each layer is trained to solve one of the three tasks. Each of these is tackled with a seq2seq classification using a CRF output layer, as in BIBREF3.",
"The NLU problem has been studied also on the Interactive Robotics front, mostly to support basic dialogue systems, with few dialogue states and tailored for specific tasks, such as semantic mapping BIBREF17, navigation BIBREF18, BIBREF19, or grounded language learning BIBREF20. However, the designed approaches, either based on formal languages or data-driven, have never been shown to scale to real world scenarios. The work of BIBREF21 makes a step forward in this direction. Their model still deals with the single `pick and place' domain, covering no more than two intents, but it is trained on several thousands of examples, making it able to manage more unstructured language. An attempt to manage a higher number of intents, as well as more variable language, is represented by the work of BIBREF22 where the sole Frame Semantics is applied to represent user intents, with no Dialogue Acts."
],
[
"The identification of Dialogue Acts (henceforth DAs) is required to drive the dialogue manager to the next dialogue state. General frame structures (FRs) provide a reference framework to capture user intents, in terms of required or desired actions that a conversational agent has to perform. Depending on the level of abstraction required by an application, these can be interpreted as more domain-dependent paradigms like intent, or to shallower representations, such as semantic frames, as conceived in FrameNet BIBREF23. From this perspective, semantic frames represent a versatile abstraction that can be mapped over an agent's capabilities, allowing also the system to be easily extended with new functionalities without requiring the definition of new ad-hoc structures. Similarly, frame arguments (ARs) act as slots in a traditional intent-slots scheme, or to frame elements for semantic frames.",
"In our work, the whole process of extracting a complete semantic interpretation as required by the system is tackled with a multi-task learning approach across DAs, FRs, and ARs. Each of these tasks is modelled as a seq2seq problem, where a task-specific label is assigned to each token of the sentence according to the IOB2 notation BIBREF24, with “B-” marking the Beginning of the chunk, “I-” the tokens Inside the chunk while “O-” is assigned to any token that does not belong to any chunk. Task labels are drawn from the set of classes defined for DAs, FRs, and ARs. Figure TABREF5 shows an example of the tagging layers over the sentence Where can I find Starbucks?, where Frame Semantics has been selected as underlying reference theory."
],
[
"The central motivation behind the proposed architecture is that there is a dependence among the three tasks of identifying DAs, FRs, and ARs. The relationship between tagging frame and arguments appears more evident, as also developed in theories like Frame Semantics – although it is defined independently by each theory. However, some degree of dependence also holds between the DAs and FRs. For example, the FrameNet semantic frame Desiring, expressing a desire of the user for an event to occur, is more likely to be used in the context of an Inform DA, which indicates the state of notifying the agent with an information, other than in an Instruction. This is clearly visible in interactions like “I'd like a cup of hot chocolate” or “I'd like to find a shoe shop”, where the user is actually notifying the agent about a desire of hers/his.",
"In order to reflect such inter-task dependence, the classification process is tackled here through a hierarchical multi-task learning approach. We designed a multi-layer neural network, whose architecture is shown in Figure FIGREF7, where each layer is trained to solve one of the three tasks, namely labelling dialogue acts ($DA$ layer), semantic frames ($FR$ layer), and frame elements ($AR$ layer). The layers are arranged in a hierarchical structure that allows the information produced by earlier layers to be fed to downstream tasks.",
"The network is mainly composed of three BiLSTM BIBREF25 encoding layers. A sequence of input words is initially converted into an embedded representation through an ELMo embeddings layer BIBREF26, and is fed to the $DA$ layer. The embedded representation is also passed over through shortcut connections BIBREF1, and concatenated with both the outputs of the $DA$ and $FR$ layers. Self-attention layers BIBREF27 are placed after the $DA$ and $FR$ BiLSTM encoders. Where $w_t$ is the input word at time step $t$ of the sentence $\\textbf {\\textrm {w}} = (w_1, ..., w_T)$, the architecture can be formalised by:",
"where $\\oplus $ represents the vector concatenation operator, $e_t$ is the embedding of the word at time $t$, and $\\textbf {\\textrm {s}}^{L}$ = ($s_1^L$, ..., $s_T^L$) is the embedded sequence output of each $L$ layer, with $L = \\lbrace DA, FR, AR\\rbrace $. Given an input sentence, the final sequence of labels $\\textbf {y}^L$ for each task is computed through a CRF tagging layer, which operates on the output of the $DA$ and $FR$ self-attention, and of the $AR$ BiLSTM embedding, so that:",
"where a$^{DA}$, a$^{FR}$ are attended embedded sequences. Due to shortcut connections, layers in the upper levels of the architecture can rely both on direct word embeddings as well as the hidden representation $a_t^L$ computed by a previous layer. Operationally, the latter carries task specific information which, combined with the input embeddings, helps in stabilising the classification of each CRF layer, as shown by our experiments. The network is trained by minimising the sum of the individual negative log-likelihoods of the three CRF layers, while at test time the most likely sequence is obtained through the Viterbi decoding over the output scores of the CRF layer."
],
[
"In order to assess the effectiveness of the proposed architecture and compare against existing off-the-shelf tools, we run several empirical evaluations."
],
[
"We tested the system on two datasets, different in size and complexity of the addressed language."
],
[
"The first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains $25,716$ utterances annotated with targeted Scenario, Action, and involved Entities. For example, “schedule a call with Lisa on Monday morning” is labelled to contain a calendar scenario, where the set_event action is instantiated through the entities [event_name: a call with Lisa] and [date: Monday morning]. The Intent is then obtained by concatenating scenario and action labels (e.g., calendar_set_event). This dataset consists of multiple home assistant task domains (e.g., scheduling, playing music), chit-chat, and commands to a robot BIBREF7."
],
[
"The second dataset, ROMULUS, is composed of $1,431$ sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns – e.g., chit-chat, command interpretation. The corpus is composed of different subsections, addressing heterogeneous linguistic phenomena, ranging from imperative instructions (e.g., “enter the bedroom slowly, turn left and turn the lights off ”) to complex requests for information (e.g., “good morning I want to buy a new mobile phone is there any shop nearby?”) or open-domain chit-chat (e.g., “nope thanks let's talk about cinema”). A considerable number of utterances in the dataset is collected through Human-Human Interaction studies in robotic domain ($\\approx $$70\\%$), though a small portion has been synthetically generated for balancing the frame distribution.",
"Note that while the NLU-BM is designed to have at most one intent per utterance, sentences are here tagged following the IOB2 sequence labelling scheme (see example of Figure TABREF5), so that multiple dialogue acts, frames, and frame elements can be defined at the same time for the same utterance. For example, three dialogue acts are identified within the sentence [good morning]$_{\\textsc {Opening}}$ [I want to buy a new mobile phone]$_{\\textsc {Inform}}$ [is there any shop nearby?]$_{\\textsc {Req\\_info}}$. As a result, though smaller, the ROMULUS dataset provides a richer representation of the sentence's semantics, making the tasks more complex and challenging. These observations are highlighted by the statistics in Table TABREF13, that show an average number of dialogue acts, frames and frame elements always greater than 1 (i.e., $1.33$, $1.41$ and $3.54$, respectively)."
],
[
"All the models are implemented with Keras BIBREF28 and Tensorflow BIBREF29 as backend, and run on a Titan Xp. Experiments are performed in a 10-fold setting, using one fold for tuning and one for testing. However, since HERMIT is designed to operate on dialogue acts, semantic frames and frame elements, the best hyperparameters are obtained over the ROMULUS dataset via a grid search using early stopping, and are applied also to the NLU-BM models. This guarantees fairness towards other systems, that do not perform any fine-tuning on the training data. We make use of pre-trained 1024-dim ELMo embeddings BIBREF26 as word vector representations without re-training the weights."
],
[
"This section shows the results obtained on the NLU-Benchmark (NLU-BM) dataset provided by BIBREF7, by comparing HERMIT to off-the-shelf NLU services, namely: Rasa, Dialogflow, LUIS and Watson. In order to apply HERMIT to NLU-BM annotations, these have been aligned so that Scenarios are treated as DAs, Actions as FRs and Entities as ARs.",
"To make our model comparable against other approaches, we reproduced the same folds as in BIBREF7, where a resized version of the original dataset is used. Table TABREF11 shows some statistics of the NLU-BM and its reduced version. Moreover, micro-averaged Precision, Recall and F1 are computed following the original paper to assure consistency. TP, FP and FN of intent labels are obtained as in any other multi-class task. An entity is instead counted as TP if there is an overlap between the predicted and the gold span, and their labels match.",
"Experimental results are reported in Table TABREF21. The statistical significance is evaluated through the Wilcoxon signed-rank test. When looking at the intent F1, HERMIT performs significantly better than Rasa $[Z=-2.701, p = .007]$ and LUIS $[Z=-2.807, p = .005]$. On the contrary, the improvements w.r.t. Dialogflow $[Z=-1.173, p = .241]$ do not seem to be significant. This is probably due to the high variance obtained by Dialogflow across the 10 folds. Watson is by a significant margin the most accurate system in recognising intents $[Z=-2.191, p = .028]$, especially due to its Precision score.",
"The hierarchical multi-task architecture of HERMIT seems to contribute strongly to entity tagging accuracy. In fact, in this task it performs significantly better than Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.805, p = .005]$, with improvements from $7.08$ to $35.92$ of F1.",
"Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems. The statistical analysis shows a significant improvement over Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.803, p = .005]$."
],
[
"In order to assess the contributions of the HERMIT's components, we performed an ablation study. The results are obtained on the NLU-BM, following the same setup as in Section SECREF16.",
"Results are shown in Table TABREF25. The first row refers to the complete architecture, while –SA shows the results of HERMIT without the self-attention mechanism. Then, from this latter we further remove shortcut connections (– SA/CN) and CRF taggers (– SA/CRF). The last row (– SA/CN/CRF) shows the results of a simple architecture, without self-attention, shortcuts, and CRF. Though not significant, the contribution of the several architectural components can be observed. The contribution of self-attention is distributed across all the tasks, with a small inclination towards the upstream ones. This means that while the entity tagging task is mostly lexicon independent, it is easier to identify pivoting keywords for predicting the intent, e.g. the verb “schedule” triggering the calendar_set_event intent. The impact of shortcut connections is more evident on entity tagging. In fact, the effect provided by shortcut connections is that the information flowing throughout the hierarchical architecture allows higher layers to encode richer representations (i.e., original word embeddings + latent semantics from the previous task). Conversely, the presence of the CRF tagger affects mainly the lower levels of the hierarchical architecture. This is not probably due to their position in the hierarchy, but to the way the tasks have been designed. In fact, while the span of an entity is expected to cover few tokens, in intent recognition (i.e., a combination of Scenario and Action recognition) the span always covers all the tokens of an utterance. CRF therefore preserves consistency of IOB2 sequences structure. However, HERMIT seems to be the most stable architecture, both in terms of standard deviation and task performance, with a good balance between intent and entity recognition."
],
[
"In this section we report the experiments performed on the ROMULUS dataset (Table TABREF27). Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions – e.g., a match is when all the three sequences are correct.",
"Results in terms of EM reflect the complexity of the different tasks, motivating their position within the hierarchy. Specifically, dialogue act identification is the easiest task ($89.31\\%$) with respect to frame ($82.60\\%$) and frame element ($79.73\\%$), due to the shallow semantics it aims to catch. However, when looking at the span F1, its score ($89.42\\%$) is lower than the frame element identification task ($92.26\\%$). What happens is that even though the label set is smaller, dialogue act spans are supposed to be longer than frame element ones, sometimes covering the whole sentence. Frame elements, instead, are often one or two tokens long, that contribute in increasing span based metrics. Frame identification is the most complex task for several reasons. First, lots of frame spans are interlaced or even nested; this contributes to increasing the network entropy. Second, while the dialogue act label is highly related to syntactic structures, frame identification is often subject to the inherent ambiguity of language (e.g., get can evoke both Commerce_buy and Arriving). We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, essential to solve and evaluate our tasks. However, the frame element scores are comparable to the benchmark, since the task is very similar.",
"Overall, getting back to the combined EM accuracy, HERMIT seems to be promising, with the network being able to reproduce all the three gold sequences for almost $70\\%$ of the cases. The importance of this result provides an idea of the architecture behaviour over the entire pipeline."
],
[
"The experimental evaluation reported in this section provides different insights. The proposed architecture addresses the problem of NLU in wide-coverage conversational systems, modelling semantics through multiple Dialogue Acts and Frame-like structures in an end-to-end fashion. In addition, its hierarchical structure, which reflects the complexity of the single tasks, allows providing rich representations across the whole network. In this respect, we can affirm that the architecture successfully tackles the multi-task problem, with results that are promising in terms of usability and applicability of the system in real scenarios.",
"However, a thorough evaluation in the wild must be carried out, to assess to what extent the system is able to handle complex spoken language phenomena, such as repetitions, disfluencies, etc. To this end, a real scenario evaluation may open new research directions, by addressing new tasks to be included in the multi-task architecture. This is supported by the scalable nature of the proposed approach. Moreover, following BIBREF3, corpora providing different annotations can be exploited within the same multi-task network.",
"We also empirically showed how the same architectural design could be applied to a dataset addressing similar problems. In fact, a comparison with off-the-shelf tools shows the benefits provided by the hierarchical structure, with better overall performance better than any current solution. An ablation study has been performed, assessing the contribution provided by the different components of the network. The results show how the shortcut connections help in the more fine-grained tasks, successfully encoding richer representations. CRFs help when longer spans are being predicted, more present in the upstream tasks.",
"Finally, the seq2seq design allowed obtaining a multi-label approach, enabling the identification of multiple spans in the same utterance that might evoke different dialogue acts/frames. This represents a novelty for NLU in conversational systems, as such a problem has always been tackled as a single-intent detection. However, the seq2seq approach carries also some limitations, especially on the Frame Semantics side. In fact, label sequences are linear structures, not suitable for representing nested predicates, a tough and common problem in Natural Language. For example, in the sentence “I want to buy a new mobile phone”, the [to buy a new mobile phone] span represents both the Desired_event frame element of the Desiring frame and a Commerce_buy frame at the same time. At the moment of writing, we are working on modeling nested predicates through the application of bilinear models."
],
[
"We have started integrating a corpus of 5M sentences of real users chit-chatting with our conversational agent, though at the time of writing they represent only $16\\%$ of the current dataset.",
"As already pointed out in Section SECREF28, there are some limitations in the current approach that need to be addressed. First, we have to assess the network's capability in handling typical phenomena of spontaneous spoken language input, such as repetitions and disfluencies BIBREF30. This may open new research directions, by including new tasks to identify/remove any kind of noise from the spoken input. Second, the seq2seq scheme does not deal with nested predicates, a common aspect of Natural Language. To the best of our knowledge, there is no architecture that implements an end-to-end network for FrameNet based semantic parsing. Following previous work BIBREF2, one of our future goals is to tackle such problems through hierarchical multi-task architectures that rely on bilinear models."
],
[
"In this paper we presented HERMIT NLU, a hierarchical multi-task architecture for semantic parsing sentences for cross-domain spoken dialogue systems. The problem is addressed using a seq2seq model employing BiLSTM encoders and self-attention mechanisms and followed by CRF tagging layers. We evaluated HERMIT on a 25K sentences NLU-Benchmark and out-perform state-of-the-art NLU tools such as Rasa, Dialogflow, LUIS and Watson, even without specific fine-tuning of the model."
],
[
"This research was partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 688147 (MuMMER project)."
]
]
} | {
"question": [
"Which publicly available NLU dataset is used?",
"What metrics other than entity tagging are compared?"
],
"question_id": [
"bd3ccb63fd8ce5575338d7332e96def7a3fabad6",
"7c794fa0b2818d354ca666969107818a2ffdda0c"
],
"nlp_background": [
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no"
],
"search_query": [
"",
""
],
"question_writer": [
"74eea9f3f4f790836045fcc75d0b3f5156901499",
"74eea9f3f4f790836045fcc75d0b3f5156901499"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"ROMULUS dataset",
"NLU-Benchmark dataset"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We tested the system on two datasets, different in size and complexity of the addressed language.",
"Experimental Evaluation ::: Datasets ::: NLU-Benchmark dataset",
"The first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains $25,716$ utterances annotated with targeted Scenario, Action, and involved Entities. For example, “schedule a call with Lisa on Monday morning” is labelled to contain a calendar scenario, where the set_event action is instantiated through the entities [event_name: a call with Lisa] and [date: Monday morning]. The Intent is then obtained by concatenating scenario and action labels (e.g., calendar_set_event). This dataset consists of multiple home assistant task domains (e.g., scheduling, playing music), chit-chat, and commands to a robot BIBREF7.",
"Experimental Evaluation ::: Datasets ::: ROMULUS dataset",
"The second dataset, ROMULUS, is composed of $1,431$ sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns – e.g., chit-chat, command interpretation. The corpus is composed of different subsections, addressing heterogeneous linguistic phenomena, ranging from imperative instructions (e.g., “enter the bedroom slowly, turn left and turn the lights off ”) to complex requests for information (e.g., “good morning I want to buy a new mobile phone is there any shop nearby?”) or open-domain chit-chat (e.g., “nope thanks let's talk about cinema”). A considerable number of utterances in the dataset is collected through Human-Human Interaction studies in robotic domain ($\\approx $$70\\%$), though a small portion has been synthetically generated for balancing the frame distribution."
],
"highlighted_evidence": [
"We tested the system on two datasets, different in size and complexity of the addressed language.\n\nExperimental Evaluation ::: Datasets ::: NLU-Benchmark dataset\nThe first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains $25,716$ utterances annotated with targeted Scenario, Action, and involved Entities.",
"Experimental Evaluation ::: Datasets ::: ROMULUS dataset\nThe second dataset, ROMULUS, is composed of $1,431$ sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns – e.g., chit-chat, command interpretation."
]
}
],
"annotation_id": [
"5a930493d6a639a26ac411ab221dea6fcf58ec5f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"We also report the metrics in BIBREF7 for consistency",
"we report the span F1",
" Exact Match (EM) accuracy of the entire sequence of labels",
"metric that combines intent and entities"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems. The statistical analysis shows a significant improvement over Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.803, p = .005]$.",
"FLOAT SELECTED: Table 4: Comparison of HERMIT with the results in (Liu et al., 2019) by combining Intent and Entity.",
"In this section we report the experiments performed on the ROMULUS dataset (Table TABREF27). Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions – e.g., a match is when all the three sequences are correct.",
"Results in terms of EM reflect the complexity of the different tasks, motivating their position within the hierarchy. Specifically, dialogue act identification is the easiest task ($89.31\\%$) with respect to frame ($82.60\\%$) and frame element ($79.73\\%$), due to the shallow semantics it aims to catch. However, when looking at the span F1, its score ($89.42\\%$) is lower than the frame element identification task ($92.26\\%$). What happens is that even though the label set is smaller, dialogue act spans are supposed to be longer than frame element ones, sometimes covering the whole sentence. Frame elements, instead, are often one or two tokens long, that contribute in increasing span based metrics. Frame identification is the most complex task for several reasons. First, lots of frame spans are interlaced or even nested; this contributes to increasing the network entropy. Second, while the dialogue act label is highly related to syntactic structures, frame identification is often subject to the inherent ambiguity of language (e.g., get can evoke both Commerce_buy and Arriving). We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, essential to solve and evaluate our tasks. However, the frame element scores are comparable to the benchmark, since the task is very similar."
],
"highlighted_evidence": [
"Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems.",
"FLOAT SELECTED: Table 4: Comparison of HERMIT with the results in (Liu et al., 2019) by combining Intent and Entity.",
"Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions – e.g., a match is when all the three sequences are correct.",
"We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, essential to solve and evaluate our tasks."
]
}
],
"annotation_id": [
"01718f85054272860a30846dbf1eca25ecb9e512"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Dialogue Acts (DAs), Frames (FRs – here semantic frames) and Arguments (ARs – here frame elements) IOB2 tagging for the sentence Where can I find Starbucks?",
"Figure 2: HERMIT Network topology",
"Table 2: Statistics of the ROMULUS dataset.",
"Table 1: Statistics of the NLU-Benchmark dataset (Liu et al., 2019).",
"Table 3: Comparison of HERMIT with the results obtained in (Liu et al., 2019) for Intents and Entity Types.",
"Table 4: Comparison of HERMIT with the results in (Liu et al., 2019) by combining Intent and Entity.",
"Table 5: Ablation study of HERMIT on the NLU-BM.",
"Table 6: HERMIT performance over the ROMULUS dataset. P,R and F1 are evaluated following (Liu et al., 2019) metrics"
],
"file": [
"4-Figure1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png",
"5-Table1-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"8-Table6-1.png"
]
} |
1908.10449 | Interactive Machine Comprehension with Information Seeking Agents | Existing machine reading comprehension (MRC) models do not scale effectively to real-world applications like web-level information retrieval and question answering (QA). We argue that this stems from the nature of MRC datasets: most of these are static environments wherein the supporting documents and all necessary information are fully observed. In this paper, we propose a simple method that reframes existing MRC datasets as interactive, partially observable environments. Specifically, we "occlude" the majority of a document's text and add context-sensitive commands that reveal "glimpses" of the hidden text to a model. We repurpose SQuAD and NewsQA as an initial case study, and then show how the interactive corpora can be used to train a model that seeks relevant information through sequential decision making. We believe that this setting can contribute in scaling models to web-level QA scenarios. | {
"section_name": [
"Introduction",
"Related Works",
"iMRC: Making MRC Interactive",
"iMRC: Making MRC Interactive ::: Interactive MRC as a POMDP",
"iMRC: Making MRC Interactive ::: Action Space",
"iMRC: Making MRC Interactive ::: Query Types",
"iMRC: Making MRC Interactive ::: Evaluation Metric",
"Baseline Agent",
"Baseline Agent ::: Model Structure",
"Baseline Agent ::: Model Structure ::: Encoder",
"Baseline Agent ::: Model Structure ::: Action Generator",
"Baseline Agent ::: Model Structure ::: Question Answerer",
"Baseline Agent ::: Memory and Reward Shaping ::: Memory",
"Baseline Agent ::: Memory and Reward Shaping ::: Reward Shaping",
"Baseline Agent ::: Memory and Reward Shaping ::: Ctrl+F Only Mode",
"Baseline Agent ::: Training Strategy",
"Baseline Agent ::: Training Strategy ::: Action Generation",
"Baseline Agent ::: Training Strategy ::: Question Answering",
"Experimental Results",
"Experimental Results ::: Mastering Training Games",
"Experimental Results ::: Generalizing to Test Set",
"Discussion and Future Work"
],
"paragraphs": [
[
"Many machine reading comprehension (MRC) datasets have been released in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 to benchmark a system's ability to understand and reason over natural language. Typically, these datasets require an MRC model to read through a document to answer a question about information contained therein.",
"The supporting document is, more often than not, static and fully observable. This raises concerns, since models may find answers simply through shallow pattern matching; e.g., syntactic similarity between the words in questions and documents. As pointed out by BIBREF5, for questions starting with when, models tend to predict the only date/time answer in the supporting document. Such behavior limits the generality and usefulness of MRC models, and suggests that they do not learn a proper `understanding' of the intended task. In this paper, to address this problem, we shift the focus of MRC data away from `spoon-feeding' models with sufficient information in fully observable, static documents. Instead, we propose interactive versions of existing MRC tasks, whereby the information needed to answer a question must be gathered sequentially.",
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL).",
"As an initial case study, we repurpose two well known, related corpora with different difficulty levels for our interactive MRC task: SQuAD and NewsQA. Table TABREF2 shows some examples of a model performing interactive MRC on these datasets. Naturally, our reframing makes the MRC problem harder; however, we believe the added demands of iMRC more closely match web-level QA and may lead to deeper comprehension of documents' content.",
"The main contributions of this work are as follows:",
"We describe a method to make MRC datasets interactive and formulate the new task as an RL problem.",
"We develop a baseline agent that combines a top performing MRC model and a state-of-the-art RL optimization algorithm and test it on our iMRC tasks.",
"We conduct experiments on several variants of iMRC and discuss the significant challenges posed by our setting."
],
[
"Skip-reading BIBREF6, BIBREF7, BIBREF8 is an existing setting in which MRC models read partial documents. Concretely, these methods assume that not all tokens in the input sequence are useful, and therefore learn to skip irrelevant tokens based on the current input and their internal memory. Since skipping decisions are discrete, the models are often optimized by the REINFORCE algorithm BIBREF9. For example, the structural-jump-LSTM proposed in BIBREF10 learns to skip and jump over chunks of text. In a similar vein, BIBREF11 designed a QA task where the model reads streaming data unidirectionally, without knowing when the question will be provided. Skip-reading approaches are limited in that they only consider jumping over a few consecutive tokens and the skipping operations are usually unidirectional. Based on the assumption that a single pass of reading may not provide sufficient information, multi-pass reading methods have also been studied BIBREF12, BIBREF13.",
"Compared to skip-reading and multi-turn reading, our work enables an agent to jump through a document in a more dynamic manner, in some sense combining aspects of skip-reading and re-reading. For example, it can jump forward, backward, or to an arbitrary position, depending on the query. This also distinguishes the model we develop in this work from ReasoNet BIBREF13, where an agent decides when to stop unidirectional reading.",
"Recently, BIBREF14 propose DocQN, which is a DQN-based agent that leverages the (tree) structure of documents and navigates across sentences and paragraphs. The proposed method has been shown to outperform vanilla DQN and IR baselines on TriviaQA dataset. The main differences between our work and DocQA include: iMRC does not depend on extra meta information of documents (e.g., title, paragraph title) for building document trees as in DocQN; our proposed environment is partially-observable, and thus an agent is required to explore and memorize the environment via interaction; the action space in our setting (especially for the Ctrl+F command as defined in later section) is arguably larger than the tree sampling action space in DocQN.",
"Closely related to iMRC is work by BIBREF15, in which the authors introduce a collection of synthetic tasks to train and test information-seeking capabilities in neural models. We extend that work by developing a realistic and challenging text-based task.",
"Broadly speaking, our approach is also linked to the optimal stopping problem in the literature Markov decision processes (MDP) BIBREF16, where at each time-step the agent either continues or stops and accumulates reward. Here, we reformulate conventional QA tasks through the lens of optimal stopping, in hopes of improving over the shallow matching behaviors exhibited by many MRC systems."
],
[
"We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\\lbrace p, q, a\\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.",
"We first split every paragraph $p$ into a list of sentences $\\mathcal {S} = \\lbrace s_1, s_2, ..., s_n\\rbrace $, where $n$ stands for number of sentences in $p$. Given a question $q$, rather than showing the entire paragraph $p$, we only show an agent the first sentence $s_1$ and withhold the rest. The agent must issue commands to reveal the hidden sentences progressively and thereby gather the information needed to answer question $q$.",
"An agent decides when to stop interacting and output an answer, but the number of interaction steps is limited. Once an agent has exhausted its step budget, it is forced to answer the question."
],
[
"As described in the previous section, we convert MRC tasks into sequential decision-making problems (which we will refer to as games). These can be described naturally within the reinforcement learning (RL) framework. Formally, tasks in iMRC are partially observable Markov decision processes (POMDP) BIBREF17. An iMRC data-point is a discrete-time POMDP defined by $(S, T, A, \\Omega , O, R, \\gamma )$, where $\\gamma \\in [0, 1]$ is the discount factor and the other elements are described in detail below.",
"Environment States ($S$): The environment state at turn $t$ in the game is $s_t \\in S$. It contains the complete internal information of the game, much of which is hidden from the agent. When an agent issues an action $a_t$, the environment transitions to state $s_{t+1}$ with probability $T(s_{t+1} | s_t, a_t)$). In this work, transition probabilities are either 0 or 1 (i.e., deterministic environment).",
"Actions ($A$): At each game turn $t$, the agent issues an action $a_t \\in A$. We will elaborate on the action space of iMRC in the action space section.",
"Observations ($\\Omega $): The text information perceived by the agent at a given game turn $t$ is the agent's observation, $o_t \\in \\Omega $, which depends on the environment state and the previous action with probability $O(o_t|s_t)$. In this work, observation probabilities are either 0 or 1 (i.e., noiseless observation). Reward Function ($R$): Based on its actions, the agent receives rewards $r_t = R(s_t, a_t)$. Its objective is to maximize the expected discounted sum of rewards $E \\left[\\sum _t \\gamma ^t r_t \\right]$."
],
[
"To better describe the action space of iMRC, we split an agent's actions into two phases: information gathering and question answering. During the information gathering phase, the agent interacts with the environment to collect knowledge. It answers questions with its accumulated knowledge in the question answering phase.",
"Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \\le k \\le n$:",
"previous: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_n & \\text{if $k = 1$,}\\\\ s_{k-1} & \\text{otherwise;} \\end{array}\\right.} $",
"next: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_1 & \\text{if $k = n$,}\\\\ s_{k+1} & \\text{otherwise;} \\end{array}\\right.} $",
"Ctrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;",
"stop: terminate information gathering phase.",
"Question Answering: We follow the output format of both SQuAD and NewsQA, where an agent is required to point to the head and tail positions of an answer span within $p$. Assume that at step $t$ the agent stops interacting and the observation $o_t$ is $s_k$. The agent points to a head-tail position pair in $s_k$."
],
[
"Given the question “When is the deadline of AAAI?”, as a human, one might try searching “AAAI” on a search engine, follow the link to the official AAAI website, then search for keywords “deadline” or “due date” on the website to jump to a specific paragraph. Humans have a deep understanding of questions because of their significant background knowledge. As a result, the keywords they use to search are not limited to what appears in the question.",
"Inspired by this observation, we study 3 query types for the Ctrl+F $<$query$>$ command.",
"One token from the question: the setting with smallest action space. Because iMRC deals with Ctrl+F commands by exact string matching, there is no guarantee that all sentences are accessible from question tokens only.",
"One token from the union of the question and the current observation: an intermediate level where the action space is larger.",
"One token from the dataset vocabulary: the action space is huge (see Table TABREF16 for statistics of SQuAD and NewsQA). It is guaranteed that all sentences in all documents are accessible through these tokens."
],
[
"Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance ."
],
[
"As a baseline, we propose QA-DQN, an agent that adopts components from QANet BIBREF18 and adds an extra command generation module inspired by LSTM-DQN BIBREF19.",
"As illustrated in Figure FIGREF6, the agent consists of three components: an encoder, an action generator, and a question answerer. More precisely, at a game step $t$, the encoder reads observation string $o_t$ and question string $q$ to generate attention aggregated hidden representations $M_t$. Using $M_t$, the action generator outputs commands (defined in previous sections) to interact with iMRC. If the generated command is stop or the agent is forced to stop, the question answerer takes the current information at game step $t$ to generate head and tail pointers for answering the question; otherwise, the information gathering procedure continues.",
"In this section, we describe the high-level model structure and training strategies of QA-DQN. We refer readers to BIBREF18 for detailed information. We will release datasets and code in the near future."
],
[
"In this section, we use game step $t$ to denote one round of interaction between an agent with the iMRC environment. We use $o_t$ to denote text observation at game step $t$ and $q$ to denote question text. We use $L$ to refer to a linear transformation. $[\\cdot ;\\cdot ]$ denotes vector concatenation."
],
[
"The encoder consists of an embedding layer, two stacks of transformer blocks (denoted as encoder transformer blocks and aggregation transformer blocks), and an attention layer.",
"In the embedding layer, we aggregate both word- and character-level embeddings. Word embeddings are initialized by the 300-dimension fastText BIBREF20 vectors trained on Common Crawl (600B tokens), and are fixed during training. Character embeddings are initialized by 200-dimension random vectors. A convolutional layer with 96 kernels of size 5 is used to aggregate the sequence of characters. We use a max pooling layer on the character dimension, then a multi-layer perceptron (MLP) of size 96 is used to aggregate the concatenation of word- and character-level representations. A highway network BIBREF21 is used on top of this MLP. The resulting vectors are used as input to the encoding transformer blocks.",
"Each encoding transformer block consists of four convolutional layers (with shared weights), a self-attention layer, and an MLP. Each convolutional layer has 96 filters, each kernel's size is 7. In the self-attention layer, we use a block hidden size of 96 and a single head attention mechanism. Layer normalization and dropout are applied after each component inside the block. We add positional encoding into each block's input. We use one layer of such an encoding block.",
"At a game step $t$, the encoder processes text observation $o_t$ and question $q$ to generate context-aware encodings $h_{o_t} \\in \\mathbb {R}^{L^{o_t} \\times H_1}$ and $h_q \\in \\mathbb {R}^{L^{q} \\times H_1}$, where $L^{o_t}$ and $L^{q}$ denote length of $o_t$ and $q$ respectively, $H_1$ is 96.",
"Following BIBREF18, we use a context-query attention layer to aggregate the two representations $h_{o_t}$ and $h_q$. Specifically, the attention layer first uses two MLPs to map $h_{o_t}$ and $h_q$ into the same space, with the resulting representations denoted as $h_{o_t}^{\\prime } \\in \\mathbb {R}^{L^{o_t} \\times H_2}$ and $h_q^{\\prime } \\in \\mathbb {R}^{L^{q} \\times H_2}$, in which, $H_2$ is 96.",
"Then, a tri-linear similarity function is used to compute the similarities between each pair of $h_{o_t}^{\\prime }$ and $h_q^{\\prime }$ items:",
"where $\\odot $ indicates element-wise multiplication and $w$ is trainable parameter vector of size 96.",
"We apply softmax to the resulting similarity matrix $S$ along both dimensions, producing $S^A$ and $S^B$. Information in the two representations are then aggregated as",
"where $h_{oq}$ is aggregated observation representation.",
"On top of the attention layer, a stack of aggregation transformer blocks is used to further map the observation representations to action representations and answer representations. The configuration parameters are the same as the encoder transformer blocks, except there are two convolution layers (with shared weights), and the number of blocks is 7.",
"Let $M_t \\in \\mathbb {R}^{L^{o_t} \\times H_3}$ denote the output of the stack of aggregation transformer blocks, in which $H_3$ is 96."
],
[
"The action generator takes $M_t$ as input and estimates Q-values for all possible actions. As described in previous section, when an action is a Ctrl+F command, it is composed of two tokens (the token “Ctrl+F” and the query token). Therefore, the action generator consists of three MLPs:",
"Here, the size of $L_{shared} \\in \\mathbb {R}^{95 \\times 150}$; $L_{action}$ has an output size of 4 or 2 depending on the number of actions available; the size of $L_{ctrlf}$ is the same as the size of a dataset's vocabulary size (depending on different query type settings, we mask out words in the vocabulary that are not query candidates). The overall Q-value is simply the sum of the two components:"
],
[
"Following BIBREF18, we append two extra stacks of aggregation transformer blocks on top of the encoder to compute head and tail positions:",
"Here, $M_{head}$ and $M_{tail}$ are outputs of the two extra transformer stacks, $L_0$, $L_1$, $L_2$ and $L_3$ are trainable parameters with output size 150, 150, 1 and 1, respectively."
],
[
"In iMRC, some questions may not be easily answerable based only on observation of a single sentence. To overcome this limitation, we provide an explicit memory mechanism to QA-DQN. Specifically, we use a queue to store strings that have been observed recently. The queue has a limited size of slots (we use queues of size [1, 3, 5] in this work). This prevents the agent from issuing next commands until the environment has been observed fully, in which case our task would degenerate to the standard MRC setting. The memory slots are reset episodically."
],
[
"Because the question answerer in QA-DQN is a pointing model, its performance relies heavily on whether the agent can find and stop at the sentence that contains the answer. We design a heuristic reward to encourage and guide this behavior. In particular, we assign a reward if the agent halts at game step $k$ and the answer is a sub-string of $o_k$ (if larger memory slots are used, we assign this reward if the answer is a sub-string of the memory at game step $k$). We denote this reward as the sufficient information reward, since, if an agent sees the answer, it should have a good chance of having gathered sufficient information for the question (although this is not guaranteed).",
"Note this sufficient information reward is part of the design of QA-DQN, whereas the question answering score is the only metric used to evaluate an agent's performance on the iMRC task."
],
[
"As mentioned above, an agent might bypass Ctrl+F actions and explore an iMRC game only via next commands. We study this possibility in an ablation study, where we limit the agent to the Ctrl+F and stop commands. In this setting, an agent is forced to explore by means of search a queries."
],
[
"In this section, we describe our training strategy. We split the training pipeline into two parts for easy comprehension. We use Adam BIBREF22 as the step rule for optimization in both parts, with the learning rate set to 0.00025."
],
[
"iMRC games are interactive environments. We use an RL training algorithm to train the interactive information-gathering behavior of QA-DQN. We adopt the Rainbow algorithm proposed by BIBREF23, which integrates several extensions to the original Deep Q-Learning algorithm BIBREF24. Rainbox exhibits state-of-the-art performance on several RL benchmark tasks (e.g., Atari games).",
"During game playing, we use a mini-batch of size 10 and push all transitions (observation string, question string, generated command, reward) into a replay buffer of size 500,000. We do not compute losses directly using these transitions. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer, compute loss, and update the network.",
"Detailed hyper-parameter settings for action generation are shown in Table TABREF38."
],
[
"Similarly, we use another replay buffer to store question answering transitions (observation string when interaction stops, question string, ground-truth answer).",
"Because both iSQuAD and iNewsQA are converted from datasets that provide ground-truth answer positions, we can leverage this information and train the question answerer with supervised learning. Specifically, we only push question answering transitions when the ground-truth answer is in the observation string. For each transition, we convert the ground-truth answer head- and tail-positions from the SQuAD and NewsQA datasets to positions in the current observation string. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer and train the question answerer using the Negative Log-Likelihood (NLL) loss. We use a dropout rate of 0.1."
],
[
"In this study, we focus on three factors and their effects on iMRC and the performance of the QA-DQN agent:",
"different Ctrl+F strategies, as described in the action space section;",
"enabled vs. disabled next and previous actions;",
"different memory slot sizes.",
"Below we report the baseline agent's training performance followed by its generalization performance on test data."
],
[
"It remains difficult for RL agents to master multiple games at the same time. In our case, each document-question pair can be considered a unique game, and there are hundred of thousands of them. Therefore, as is common practice in the RL literature, we study an agent's training curves.",
"Due to the space limitations, we select several representative settings to discuss in this section and provide QA-DQN's training and evaluation curves for all experimental settings in the Appendix. We provide the agent's sufficient information rewards (i.e., if the agent stopped at a state where the observation contains the answer) during training in Appendix as well.",
"Figure FIGREF36 shows QA-DQN's training performance ($\\text{F}_1$ score) when next and previous actions are available. Figure FIGREF40 shows QA-DQN's training performance ($\\text{F}_1$ score) when next and previous actions are disabled. Note that all training curves are averaged over 3 runs with different random seeds and all evaluation curves show the one run with max validation performance among the three.",
"From Figure FIGREF36, we can see that the three Ctrl+F strategies show similar difficulty levels when next and previous are available, although QA-DQN works slightly better when selecting a word from the question as query (especially on iNewsQA). However, from Figure FIGREF40 we observe that when next and previous are disabled, QA-DQN shows significant advantage when selecting a word from the question as query. This may due to the fact that when an agent must use Ctrl+F to navigate within documents, the set of question words is a much smaller action space in contrast to the other two settings. In the 4-action setting, an agent can rely on issuing next and previous actions to reach any sentence in a document.",
"The effect of action space size on model performance is particularly clear when using a datasets' entire vocabulary as query candidates in the 2-action setting. From Figure FIGREF40 (and figures with sufficient information rewards in the Appendix) we see QA-DQN has a hard time learning in this setting. As shown in Table TABREF16, both datasets have a vocabulary size of more than 100k. This is much larger than in the other two settings, where on average the length of questions is around 10. This suggests that the methods with better sample efficiency are needed to act in more realistic problem settings with huge action spaces.",
"Experiments also show that a larger memory slot size always helps. Intuitively, with a memory mechanism (either implicit or explicit), an agent could make the environment closer to fully observed by exploring and memorizing observations. Presumably, a larger memory may further improve QA-DQN's performance, but considering the average number of sentences in each iSQuAD game is 5, a memory with more than 5 slots will defeat the purpose of our study of partially observable text environments.",
"Not surprisingly, QA-DQN performs worse in general on iNewsQA, in all experiments. As shown in Table TABREF16, the average number of sentences per document in iNewsQA is about 6 times more than in iSQuAD. This is analogous to games with larger maps in the RL literature, where the environment is partially observable. A better exploration (in our case, jumping) strategy may help QA-DQN to master such harder games."
],
[
"To study QA-DQN's ability to generalize, we select the best performing agent in each experimental setting on the validation set and report their performance on the test set. The agent's test performance is reported in Table TABREF41. In addition, to support our claim that the challenging part of iMRC tasks is information seeking rather than answering questions given sufficient information, we also report the $\\text{F}_1$ score of an agent when it has reached the piece of text that contains the answer, which we denote as $\\text{F}_{1\\text{info}}$.",
"From Table TABREF41 (and validation curves provided in appendix) we can observe that QA-DQN's performance during evaluation matches its training performance in most settings. $\\text{F}_{1\\text{info}}$ scores are consistently higher than the overall $\\text{F}_1$ scores, and they have much less variance across different settings. This supports our hypothesis that information seeking play an important role in solving iMRC tasks, whereas question answering given necessary information is relatively straightforward. This also suggests that an interactive agent that can better navigate to important sentences is very likely to achieve better performance on iMRC tasks."
],
[
"In this work, we propose and explore the direction of converting MRC datasets into interactive environments. We believe interactive, information-seeking behavior is desirable for neural MRC systems when knowledge sources are partially observable and/or too large to encode in their entirety — for instance, when searching for information on the internet, where knowledge is by design easily accessible to humans through interaction.",
"Despite being restricted, our proposed task presents major challenges to existing techniques. iMRC lies at the intersection of NLP and RL, which is arguably less studied in existing literature. We hope to encourage researchers from both NLP and RL communities to work toward solving this task.",
"For our baseline, we adopted an off-the-shelf, top-performing MRC model and RL method. Either component can be replaced straightforwardly with other methods (e.g., to utilize a large-scale pretrained language model).",
"Our proposed setup and baseline agent presently use only a single word with the query command. However, a host of other options should be considered in future work. For example, multi-word queries with fuzzy matching are more realistic. It would also be interesting for an agent to generate a vector representation of the query in some latent space. This vector could then be compared with precomputed document representations (e.g., in an open domain QA dataset) to determine what text to observe next, with such behavior tantamount to learning to do IR.",
"As mentioned, our idea for reformulating existing MRC datasets as partially observable and interactive environments is straightforward and general. Almost all MRC datasets can be used to study interactive, information-seeking behavior through similar modifications. We hypothesize that such behavior can, in turn, help in solving real-world MRC problems involving search."
]
]
} | {
"question": [
"Do they provide decision sequences as supervision while training models?",
"What are the models evaluated on?",
"How do they train models in this setup?",
"What commands does their setup provide to models seeking information?"
],
"question_id": [
"1ef5fc4473105f1c72b4d35cf93d312736833d3d",
"5f9bd99a598a4bbeb9d2ac46082bd3302e961a0f",
"b2fab9ffbcf1d6ec6d18a05aeb6e3ab9a4dbf2ae",
"e9cf1b91f06baec79eb6ddfd91fc5d434889f652"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"search_query": [
"information seeking",
"information seeking",
"information seeking",
"information seeking"
],
"question_writer": [
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
],
"highlighted_evidence": [
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
]
}
],
"annotation_id": [
"6704ca0608ed345578616637b277f39d9fff4c98"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "They evaluate F1 score and agent's test performance on their own built interactive datasets (iSQuAD and iNewsQA)",
"evidence": [
"We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\\lbrace p, q, a\\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.",
"iMRC: Making MRC Interactive ::: Evaluation Metric",
"Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance ."
],
"highlighted_evidence": [
"We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1.",
"iMRC: Making MRC Interactive ::: Evaluation Metric\nSince iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance ."
]
}
],
"annotation_id": [
"01735ec7a3f9a56955a8d3c9badc04bbd753771f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
],
"highlighted_evidence": [
"The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL)."
]
}
],
"annotation_id": [
"33b2d26d064251c196238b0c3c455b208680f5fc"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"previous",
"next",
"Ctrl+F $<$query$>$",
"stop"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \\le k \\le n$:",
"previous: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_n & \\text{if $k = 1$,}\\\\ s_{k-1} & \\text{otherwise;} \\end{array}\\right.} $",
"next: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_1 & \\text{if $k = n$,}\\\\ s_{k+1} & \\text{otherwise;} \\end{array}\\right.} $",
"Ctrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;",
"stop: terminate information gathering phase."
],
"highlighted_evidence": [
"Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \\le k \\le n$:\n\nprevious: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_n & \\text{if $k = 1$,}\\\\ s_{k-1} & \\text{otherwise;} \\end{array}\\right.} $\n\nnext: jump to $ \\small {\\left\\lbrace \\begin{array}{ll} s_1 & \\text{if $k = n$,}\\\\ s_{k+1} & \\text{otherwise;} \\end{array}\\right.} $\n\nCtrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;\n\nstop: terminate information gathering phase."
]
}
],
"annotation_id": [
"7f1096ea26f2374fc33f07acc67d80aeb7004dc2"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Examples of interactive machine reading comprehension behavior. In the upper example, the agent has no memory of past observations, and thus it answers questions only with observation string at current step. In the lower example, the agent is able to use its memory to find answers.",
"Figure 1: A demonstration of the proposed iMRC pipeline, in which the QA-DQN agent is illustrated in shaddow. At a game step t, QA-DQN encodes the question and text observation into hidden representations Mt. An action generator takes Mt as input to generate commands to interact with the environment. If the agent generates stop at this game step, Mt is used to answer question by a question answerer. Otherwise, the iMRC environment will provide new text observation in response of the generated action.",
"Table 2: Statistics of iSQuAD and iNewsQA.",
"Figure 2: 4-action setting: QA-DQN’s F1 scores during training on iSQuAD and iNewsQA datasets with different Ctrl+F strategies and cache sizes. next and previous commands are available.",
"Table 3: Hyper-parameter setup for action generation.",
"Table 4: Experimental results on test set. #Action 4 denotes the settings as described in the action space section, #Action 2 indicates the setting where only Ctrl+F and stop are available. F1info indicates an agent’s F1 score iff sufficient information is in its observation.",
"Figure 3: 2-action setting: QA-DQN’s F1 scores during training on iSQuAD and iNewsQA datasets when using different Ctrl+F strategies and cache sizes. Note that next and previous are disabled.",
"Figure 4: Performance on iSQuAD training set. next and previous actions are available.",
"Figure 6: Performance on iSQuAD training set. next and previous actions are unavailable.",
"Figure 5: Performance on iSQuAD validation set. next and previous actions are available.",
"Figure 7: Performance on iSQuAD validation set. next and previous actions are unavailable.",
"Figure 8: Performance on iNewsQA training set. next and previous actions are available.",
"Figure 10: Performance on iNewsQA training set. next and previous actions are unavailable.",
"Figure 9: Performance on iNewsQA validation set. next and previous actions are available.",
"Figure 11: Performance on iNewsQA validation set. next and previous actions are unavailable."
],
"file": [
"1-Table1-1.png",
"2-Figure1-1.png",
"3-Table2-1.png",
"5-Figure2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Figure3-1.png",
"9-Figure4-1.png",
"9-Figure6-1.png",
"9-Figure5-1.png",
"9-Figure7-1.png",
"10-Figure8-1.png",
"10-Figure10-1.png",
"10-Figure9-1.png",
"10-Figure11-1.png"
]
} |
1910.03814 | Exploring Hate Speech Detection in Multimodal Publications | In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why and open the field and the dataset for further research. | {
"section_name": [
"Introduction",
"Related Work ::: Hate Speech Detection",
"Related Work ::: Visual and Textual Data Fusion",
"The MMHS150K dataset",
"The MMHS150K dataset ::: Tweets Gathering",
"The MMHS150K dataset ::: Textual Image Filtering",
"The MMHS150K dataset ::: Annotation",
"Methodology ::: Unimodal Treatment ::: Images.",
"Methodology ::: Unimodal Treatment ::: Tweet Text.",
"Methodology ::: Unimodal Treatment ::: Image Text.",
"Methodology ::: Multimodal Architectures",
"Methodology ::: Multimodal Architectures ::: Feature Concatenation Model (FCM)",
"Methodology ::: Multimodal Architectures ::: Spatial Concatenation Model (SCM)",
"Methodology ::: Multimodal Architectures ::: Textual Kernels Model (TKM)",
"Methodology ::: Multimodal Architectures ::: Training",
"Results",
"Conclusions"
],
"paragraphs": [
[
"Social Media platforms such as Facebook, Twitter or Reddit have empowered individuals' voices and facilitated freedom of expression. However they have also been a breeding ground for hate speech and other types of online harassment. Hate speech is defined in legal literature as speech (or any form of expression) that expresses (or seeks to promote, or has the capacity to increase) hatred against a person or a group of people because of a characteristic they share, or a group to which they belong BIBREF0. Twitter develops this definition in its hateful conduct policy as violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.",
"In this work we focus on hate speech detection. Due to the inherent complexity of this task, it is important to distinguish hate speech from other types of online harassment. In particular, although it might be offensive to many people, the sole presence of insulting terms does not itself signify or convey hate speech. And, the other way around, hate speech may denigrate or threaten an individual or a group of people without the use of any profanities. People from the african-american community, for example, often use the term nigga online, in everyday language, without malicious intentions to refer to folks within their community, and the word cunt is often used in non hate speech publications and without any sexist purpose. The goal of this work is not to discuss if racial slur, such as nigga, should be pursued. The goal is to distinguish between publications using offensive terms and publications attacking communities, which we call hate speech.",
"Modern social media content usually include images and text. Some of these multimodal publications are only hate speech because of the combination of the text with a certain image. That is because, as we have stated, the presence of offensive terms does not itself signify hate speech, and the presence of hate speech is often determined by the context of a publication. Moreover, users authoring hate speech tend to intentionally construct publications where the text is not enough to determine they are hate speech. This happens especially in Twitter, where multimodal tweets are formed by an image and a short text, which in many cases is not enough to judge them. In those cases, the image might give extra context to make a proper judgement. Fig. FIGREF5 shows some of such examples in MMHS150K.",
"The contributions of this work are as follows:",
"[noitemsep,leftmargin=*]",
"We propose the novel task of hate speech detection in multimodal publications, collect, annotate and publish a large scale dataset.",
"We evaluate state of the art multimodal models on this specific task and compare their performance with unimodal detection. Even though images are proved to be useful for hate speech detection, the proposed multimodal models do not outperform unimodal textual models.",
"We study the challenges of the proposed task, and open the field for future research."
],
[
"The literature on detecting hate speech on online textual publications is extensive. Schmidt and Wiegand BIBREF1 recently provided a good survey of it, where they review the terminology used over time, the features used, the existing datasets and the different approaches. However, the field lacks a consistent dataset and evaluation protocol to compare proposed methods. Saleem et al. BIBREF2 compare different classification methods detecting hate speech in Reddit and other forums. Wassem and Hovy BIBREF3 worked on hate speech detection on twitter, published a manually annotated dataset and studied its hate distribution. Later Wassem BIBREF4 extended the previous published dataset and compared amateur and expert annotations, concluding that amateur annotators are more likely than expert annotators to label items as hate speech. Park and Fung BIBREF5 worked on Wassem datasets and proposed a classification method using a CNN over Word2Vec BIBREF6 word embeddings, showing also classification results on racism and sexism hate sub-classes. Davidson et al. BIBREF7 also worked on hate speech detection on twitter, publishing another manually annotated dataset. They test different classifiers such as SVMs and decision trees and provide a performance comparison. Malmasi and Zampieri BIBREF8 worked on Davidson's dataset improving his results using more elaborated features. ElSherief et al. BIBREF9 studied hate speech on twitter and selected the most frequent terms in hate tweets based on Hatebase, a hate expression repository. They propose a big hate dataset but it lacks manual annotations, and all the tweets containing certain hate expressions are considered hate speech. Zhang et al. BIBREF10 recently proposed a more sophisticated approach for hate speech detection, using a CNN and a GRU BIBREF11 over Word2Vec BIBREF6 word embeddings. They show experiments in different datasets outperforming previous methods. Next, we summarize existing hate speech datasets:",
"[noitemsep,leftmargin=*]",
"RM BIBREF10: Formed by $2,435$ tweets discussing Refugees and Muslims, annotated as hate or non-hate.",
"DT BIBREF7: Formed by $24,783$ tweets annotated as hate, offensive language or neither. In our work, offensive language tweets are considered as non-hate.",
"WZ-LS BIBREF5: A combination of Wassem datasets BIBREF4, BIBREF3 labeled as racism, sexism, neither or both that make a total of $18,624$ tweets.",
"Semi-Supervised BIBREF9: Contains $27,330$ general hate speech Twitter tweets crawled in a semi-supervised manner.",
"Although often modern social media publications include images, not too many contributions exist that exploit visual information. Zhong et al. BIBREF12 worked on classifying Instagram images as potential cyberbullying targets, exploiting both the image content, the image caption and the comments. However, their visual information processing is limited to the use of features extracted by a pre-trained CNN, the use of which does not achieve any improvement. Hosseinmardi et al. BIBREF13 also address the problem of detecting cyberbullying incidents on Instagram exploiting both textual and image content. But, again, their visual information processing is limited to use the features of a pre-trained CNN, and the improvement when using visual features on cyberbullying classification is only of 0.01%."
],
[
"A typical task in multimodal visual and textual analysis is to learn an alignment between feature spaces. To do that, usually a CNN and a RNN are trained jointly to learn a joint embedding space from aligned multimodal data. This approach is applied in tasks such as image captioning BIBREF14, BIBREF15 and multimodal image retrieval BIBREF16, BIBREF17. On the other hand, instead of explicitly learning an alignment between two spaces, the goal of Visual Question Answering (VQA) is to merge both data modalities in order to decide which answer is correct. This problem requires modeling very precise correlations between the image and the question representations. The VQA task requirements are similar to our hate speech detection problem in multimodal publications, where we have a visual and a textual input and we need to combine both sources of information to understand the global context and make a decision. We thus take inspiration from the VQA literature for the tested models. Early VQA methods BIBREF18 fuse textual and visual information by feature concatenation. Later methods, such as Multimodal Compact Bilinear pooling BIBREF19, utilize bilinear pooling to learn multimodal features. An important limitation of these methods is that the multimodal features are fused in the latter model stage, so the textual and visual relationships are modeled only in the last layers. Another limitation is that the visual features are obtained by representing the output of the CNN as a one dimensional vector, which losses the spatial information of the input images. In a recent work, Gao et al. BIBREF20 propose a feature fusion scheme to overcome these limitations. They learn convolution kernels from the textual information –which they call question-guided kernels– and convolve them with the visual information in an earlier stage to get the multimodal features. Margffoy-Tuay et al. BIBREF21 use a similar approach to combine visual and textual information, but they address a different task: instance segmentation guided by natural language queries. We inspire in these latest feature fusion works to build the models for hate speech detection."
],
[
"Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exists. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, an important amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps."
],
[
"We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing less than three words and tweets containing porn related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter."
],
[
"We aim to create a multimodal hate speech database where all the instances contain visual and textual information that we can later process to determine if a tweet is hate speech or not. But a considerable amount of the images of the selected tweets contain only textual information, such as screenshots of other tweets. To ensure that all the dataset instances contain both visual and textual information, we remove those tweets. To do that, we use TextFCN BIBREF22, BIBREF23 , a Fully Convolutional Network that produces a pixel wise text probability map of an image. We set empirical thresholds to discard images that have a substantial total text probability, filtering out $23\\%$ of the collected tweets."
],
[
"We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and we ask them to classify it in one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities. Each one of the $150,000$ tweets is labeled by 3 different workers to palliate discrepancies among workers.",
"We received a lot of valuable feedback from the annotators. Most of them had understood the task correctly, but they were worried because of its subjectivity. This is indeed a subjective task, highly dependent on the annotator convictions and sensitivity. However, we expect to get cleaner annotations the more strong the attack is, which are the publications we are more interested on detecting. We also detected that several users annotate tweets for hate speech just by spotting slur. As already said previously, just the use of particular words can be offensive to many people, but this is not the task we aim to solve. We have not included in our experiments those hits that were made in less than 3 seconds, understanding that it takes more time to grasp the multimodal context and make a decision.",
"We do a majority voting between the three annotations to get the tweets category. At the end, we obtain $112,845$ not hate tweets and $36,978$ hate tweets. The latest are divided in $11,925$ racist, $3,495$ sexist, $3,870$ homophobic, 163 religion-based hate and $5,811$ other hate tweets (Fig. FIGREF17). In this work, we do not use hate sub-categories, and stick to the hate / not hate split. We separate balanced validation ($5,000$) and test ($10,000$) sets. The remaining tweets are used for training.",
"We also experimented using hate scores for each tweet computed given the different votes by the three annotators instead of binary labels. The results did not present significant differences to those shown in the experimental part of this work, but the raw annotations will be published nonetheless for further research.",
"As far as we know, this dataset is the biggest hate speech dataset to date, and the first multimodal hate speech dataset. One of its challenges is to distinguish between tweets using the same key offensive words that constitute or not an attack to a community (hate speech). Fig. FIGREF18 shows the percentage of hate and not hate tweets of the top keywords."
],
[
"All images are resized such that their shortest size has 500 pixels. During training, online data augmentation is applied as random cropping of $299\\times 299$ patches and mirroring. We use a CNN as the image features extractor which is an Imagenet BIBREF24 pre-trained Google Inception v3 architecture BIBREF25. The fine-tuning process of the Inception v3 layers aims to modify its weights to extract the features that, combined with the textual information, are optimal for hate speech detection."
],
[
"We train a single layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Since our dataset is not big enough to train a GloVe word embedding model, we used a pre-trained model that has been trained in two billion tweets. This ensures that the model will be able to produce word embeddings for slang and other words typically used in Twitter. To process the tweets text before generating the word embeddings, we use the same pipeline as the model authors, which includes generating symbols to encode Twitter special interactions such as user mentions (@user) or hashtags (#hashtag). To encode the tweet text and input it later to multimodal models, we use the LSTM hidden state after processing the last tweet word. Since the LSTM has been trained for hate speech classification, it extracts the most useful information for this task from the text, which is encoded in the hidden state after inputting the last tweet word."
],
[
"The text in the image can also contain important information to decide if a publication is hate speech or not, so we extract it and also input it to our model. To do so, we use Google Vision API Text Detection module BIBREF27. We input the tweet text and the text from the image separately to the multimodal models, so it might learn different relations between them and between them and the image. For instance, the model could learn to relate the image text with the area in the image where the text appears, so it could learn to interpret the text in a different way depending on the location where it is written in the image. The image text is also encoded by the LSTM as the hidden state after processing its last word."
],
[
"The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any)."
],
[
"The image is fed to the Inception v3 architecture and the 2048 dimensional feature vector after the last average pooling layer is used as the visual representation. This vector is then concatenated with the 150 dimension vectors of the LSTM last word hidden states of the image text and the tweet text, resulting in a 2348 feature vector. This vector is then processed by three fully connected layers of decreasing dimensionality $(2348, 1024, 512)$ with following corresponding batch normalization and ReLu layers until the dimensions are reduced to two, the number of classes, in the last classification layer. The FCM architecture is illustrated in Fig. FIGREF26."
],
[
"Instead of using the latest feature vector before classification of the Inception v3 as the visual representation, in the SCM we use the $8\\times 8\\times 2048$ feature map after the last Inception module. Then we concatenate the 150 dimension vectors encoding the tweet text and the tweet image text at each spatial location of that feature map. The resulting multimodal feature map is processed by two Inception-E blocks BIBREF28. After that, dropout and average pooling are applied and, as in the FCM model, three fully connected layers are used to reduce the dimensionality until the classification layer."
],
[
"The TKM design, inspired by BIBREF20 and BIBREF21, aims to capture interactions between the two modalities more expressively than concatenation models. As in SCM we use the $8\\times 8\\times 2048$ feature map after the last Inception module as the visual representation. From the 150 dimension vector encoding the tweet text, we learn $K_t$ text dependent kernels using independent fully connected layers that are trained together with the rest of the model. The resulting $K_t$ text dependent kernels will have dimensionality of $1\\times 1\\times 2048$. We do the same with the feature vector encoding the image text, learning $K_{it}$ kernels. The textual kernels are convolved with the visual feature map in the channel dimension at each spatial location, resulting in a $8\\times 8\\times (K_i+K_{it})$ multimodal feature map, and batch normalization is applied. Then, as in the SCM, the 150 dimension vectors encoding the tweet text and the tweet image text are concatenated at each spatial dimension. The rest of the architecture is the same as in SCM: two Inception-E blocks, dropout, average pooling and three fully connected layers until the classification layer. The number of tweet textual kernels $K_t$ and tweet image textual kernels $K_it$ is set to $K_t = 10$ and $K_it = 5$. The TKM architecture is illustrated in Fig. FIGREF29."
],
[
"We train the multimodal models with a Cross-Entropy loss with Softmax activations and an ADAM optimizer with an initial learning rate of $1e-4$. Our dataset suffers from a high class imbalance, so we weight the contribution to the loss of the samples to totally compensate for it. One of the goals of this work is to explore how every one of the inputs contributes to the classification and to prove that the proposed model can learn concurrences between visual and textual data useful to improve the hate speech classification results on multimodal data. To do that we train different models where all or only some inputs are available. When an input is not available, we set it to zeros, and we do the same when an image has no text."
],
[
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models.",
"First, notice that given the subjectivity of the task and the discrepancies between annotators, getting optimal scores in the evaluation metrics is virtually impossible. However, a system with relatively low metric scores can still be very useful for hate speech detection in a real application: it will fire on publications for which most annotators agree they are hate, which are often the stronger attacks. The proposed LSTM to detect hate speech when only text is available, gets similar results as the method presented in BIBREF7, which we trained with MMHS150K and the same splits. However, more than substantially advancing the state of the art on hate speech detection in textual publications, our key purpose in this work is to introduce and work on its detection on multimodal publications. We use LSTM because it provides a strong representation of the tweet texts.",
"The FCM trained only with images gets decent results, considering that in many publications the images might not give any useful information for the task. Fig. FIGREF33 shows some representative examples of the top hate and not hate scored images of this model. Many hate tweets are accompanied by demeaning nudity images, being sexist or homophobic. Other racist tweets are accompanied by images caricaturing black people. Finally, MEMES are also typically used in hate speech publications. The top scored images for not hate are portraits of people belonging to minorities. This is due to the use of slur inside these communities without an offensive intention, such as the word nigga inside the afro-american community or the word dyke inside the lesbian community. These results show that images can be effectively used to discriminate between offensive and non-offensive uses of those words.",
"Despite the model trained only with images proves that they are useful for hate speech detection, the proposed multimodal models are not able to improve the detection compared to the textual models. Besides the different architectures, we have tried different training strategies, such as initializing the CNN weights with a model already trained solely with MMHS150K images or using dropout to force the multimodal models to use the visual information. Eventually, though, these models end up using almost only the text input for the prediction and producing very similar results to those of the textual models. The proposed multimodal models, such as TKM, have shown good performance in other tasks, such as VQA. Next, we analyze why they do not perform well in this task and with this data:",
"[noitemsep,leftmargin=*]",
"Noisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this affects also detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection.",
"Complexity and diversity of multimodal relations. Hate speech multimodal publications employ a lot of background knowledge which makes the relations between visual and textual elements they use very complex and diverse, and therefore difficult to learn by a neural network.",
"Small set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. But although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate there is still too small to learn the complex multimodal relations needed to identify multimodal hate."
],
[
"In this work we have explored the task of hate speech detection on multimodal publications. We have created MMHS150K, to our knowledge the biggest available hate speech dataset, and the first one composed of multimodal data, namely tweets formed by image and text. We have trained different textual, visual and multimodal models with that data, and found out that, despite the fact that images are useful for hate speech detection, the multimodal models do not outperform the textual models. Finally, we have analyzed the challenges of the proposed task and dataset. Given that most of the content in Social Media nowadays is multimodal, we truly believe on the importance of pushing forward this research. The code used in this work is available in ."
]
]
} | {
"question": [
"What models do they propose?",
"Are all tweets in English?",
"How large is the dataset?",
"What is the results of multimodal compared to unimodal models?",
"What is author's opinion on why current multimodal models cannot outperform models analyzing only text?",
"What metrics are used to benchmark the results?",
"How is data collected, manual collection or Twitter api?",
"How many tweats does MMHS150k contains, 150000?",
"What unimodal detection models were used?",
"What different models for multimodal detection were proposed?",
"What annotations are available in the dataset - tweat used hate speach or not?"
],
"question_id": [
"6976296126e4a5c518e6b57de70f8dc8d8fde292",
"53640834d68cf3b86cf735ca31f1c70aa0006b72",
"b2b0321b0aaf58c3aa9050906ade6ef35874c5c1",
"4e9684fd68a242cb354fa6961b0e3b5c35aae4b6",
"2e632eb5ad611bbd16174824de0ae5efe4892daf",
"d1ff6cba8c37e25ac6b261a25ea804d8e58e09c0",
"24c0f3d6170623385283dfda7f2b6ca2c7169238",
"21a9f1cddd7cb65d5d48ec4f33fe2221b2a8f62e",
"a0ef0633d8b4040bf7cdc5e254d8adf82c8eed5e",
"b0799e26152197aeb3aa3b11687a6cc9f6c31011",
"4ce4db7f277a06595014db181342f8cb5cb94626"
],
"nlp_background": [
"two",
"two",
"two",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Feature Concatenation Model (FCM)",
"Spatial Concatenation Model (SCM)",
"Textual Kernels Model (TKM)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any)."
],
"highlighted_evidence": [
"To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM)"
]
}
],
"annotation_id": [
"e759a4245a5ac52632d3fbc424192e9e72b16350"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"594bca16b30968bbe0e3b0f68318f1788f732491"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" $150,000$ tweets"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exists. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, an important amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps."
],
"highlighted_evidence": [
"We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. "
]
}
],
"annotation_id": [
"374b00290fe0a9a6f8f123d6dc04c1c2cb7ce619"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Unimodal LSTM vs Best Multimodal (FCM)\n- F score: 0.703 vs 0.704\n- AUC: 0.732 vs 0.734 \n- Mean Accuracy: 68.3 vs 68.4 ",
"evidence": [
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models.",
"FLOAT SELECTED: Table 1. Performance of the proposed models, the LSTM and random scores. The Inputs column indicate which inputs are available at training and testing time."
],
"highlighted_evidence": [
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available.",
"FLOAT SELECTED: Table 1. Performance of the proposed models, the LSTM and random scores. The Inputs column indicate which inputs are available at training and testing time."
]
}
],
"annotation_id": [
"01747abc86fa3933552919b030e74fc9d6515178"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Noisy data",
"Complexity and diversity of multimodal relations",
"Small set of multimodal examples"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Despite the model trained only with images proves that they are useful for hate speech detection, the proposed multimodal models are not able to improve the detection compared to the textual models. Besides the different architectures, we have tried different training strategies, such as initializing the CNN weights with a model already trained solely with MMHS150K images or using dropout to force the multimodal models to use the visual information. Eventually, though, these models end up using almost only the text input for the prediction and producing very similar results to those of the textual models. The proposed multimodal models, such as TKM, have shown good performance in other tasks, such as VQA. Next, we analyze why they do not perform well in this task and with this data:",
"[noitemsep,leftmargin=*]",
"Noisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this affects also detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection.",
"Complexity and diversity of multimodal relations. Hate speech multimodal publications employ a lot of background knowledge which makes the relations between visual and textual elements they use very complex and diverse, and therefore difficult to learn by a neural network.",
"Small set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. But although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate there is still too small to learn the complex multimodal relations needed to identify multimodal hate."
],
"highlighted_evidence": [
"Next, we analyze why they do not perform well in this task and with this data:\n\n[noitemsep,leftmargin=*]\n\nNoisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this affects also detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection.\n\nComplexity and diversity of multimodal relations. Hate speech multimodal publications employ a lot of background knowledge which makes the relations between visual and textual elements they use very complex and diverse, and therefore difficult to learn by a neural network.\n\nSmall set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. But although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate there is still too small to learn the complex multimodal relations needed to identify multimodal hate."
]
}
],
"annotation_id": [
"06bfc3c0173c2bf9e8f0e7a34d8857be185f1310"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"F-score",
"Area Under the ROC Curve (AUC)",
"mean accuracy (ACC)",
"Precision vs Recall plot",
"ROC curve (which plots the True Positive Rate vs the False Positive Rate)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davison method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models."
],
"highlighted_evidence": [
"Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available.",
"Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models."
]
}
],
"annotation_id": [
"e054bc12188dfd93e3491fde76dc37247f91051d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Twitter API"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing less than three words and tweets containing porn related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter."
],
"highlighted_evidence": [
"We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9."
]
}
],
"annotation_id": [
"e2962aa33290adc42fdac994cdf8f77b90532666"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"$150,000$ tweets"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exists. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, an important amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps."
],
"highlighted_evidence": [
"We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and made it available online . In this section, we explain the dataset creation steps."
]
}
],
"annotation_id": [
"8b3d8e719caa03403c1779308c410d875d34f065"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" single layer LSTM with a 150-dimensional hidden state for hate / not hate classification"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We train a single layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Since our dataset is not big enough to train a GloVe word embedding model, we used a pre-trained model that has been trained in two billion tweets. This ensures that the model will be able to produce word embeddings for slang and other words typically used in Twitter. To process the tweets text before generating the word embeddings, we use the same pipeline as the model authors, which includes generating symbols to encode Twitter special interactions such as user mentions (@user) or hashtags (#hashtag). To encode the tweet text and input it later to multimodal models, we use the LSTM hidden state after processing the last tweet word. Since the LSTM has been trained for hate speech classification, it extracts the most useful information for this task from the text, which is encoded in the hidden state after inputting the last tweet word."
],
"highlighted_evidence": [
"We train a single layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations."
]
}
],
"annotation_id": [
"fef9d96af320e166ea80854dd890bffc92143437"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Feature Concatenation Model (FCM)",
"Spatial Concatenation Model (SCM)",
"Textual Kernels Model (TKM)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any)."
],
"highlighted_evidence": [
"To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM)."
]
}
],
"annotation_id": [
"8c2edb685d8f82b80bc60d335a6b53a86b855bd1"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"No attacks to any community",
" racist",
"sexist",
"homophobic",
"religion based attacks",
"attacks to other communities"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and we ask them to classify it in one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities. Each one of the $150,000$ tweets is labeled by 3 different workers to palliate discrepancies among workers."
],
"highlighted_evidence": [
"We then show the tweet text and image and we ask them to classify it in one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities."
]
}
],
"annotation_id": [
"c528a0f56b7aa65eeafa53dcc5747d171f526879"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1. Tweets from MMHS150K where the visual information adds relevant context for the hate speech detection task.",
"Figure 2. Percentage of tweets per class in MMHS150K.",
"Figure 3. Percentage of hate and not hate tweets for top keywords of MMHS150K.",
"Figure 4. FCM architecture. Image and text representations are concatenated and processed by a set of fully connected layers.",
"Figure 5. TKM architecture. Textual kernels are learnt from the text representations, and convolved with the image representation.",
"Table 1. Performance of the proposed models, the LSTM and random scores. The Inputs column indicate which inputs are available at training and testing time.",
"Figure 7. Top scored examples for hate (top) and for not hate (bottom) for the FCM model trained only with images.",
"Figure 6. Precision vs Recall (left) and ROC curve (True Positive Rate vs False Positive Rate) (right) plots of the proposed models trained with the different inputs, the LSTM and random scores."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png",
"7-Table1-1.png",
"7-Figure7-1.png",
"7-Figure6-1.png"
]
} |
1701.00185 | Self-Taught Convolutional Neural Networks for Short Text Clustering | Short text clustering is a challenging problem due to the sparseness of text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC^2), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representation in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes by using an existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes in the training process. Finally, we get the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective, flexible and outperforms several popular clustering methods when tested on three public short text datasets. | {
"section_name": [
"Introduction",
"Related Work",
"Short Text Clustering",
"Deep Neural Networks",
"Methodology",
"Deep Convolutional Neural Networks",
"Unsupervised Dimensionality Reduction",
"Learning",
"K-means for Clustering",
"Datasets",
"Pre-trained Word Vectors",
"Comparisons",
"Evaluation Metrics",
"Hyperparameter Settings",
"Results and Analysis",
"Conclusions",
"Acknowledgments"
],
"paragraphs": [
[
"Short text clustering is of great importance due to its various applications, such as user profiling BIBREF0 and recommendation BIBREF1 , for nowaday's social media dataset emerged day by day. However, short text clustering has the data sparsity problem and most words only occur once in each short text BIBREF2 . As a result, the Term Frequency-Inverse Document Frequency (TF-IDF) measure cannot work well in short text setting. In order to address this problem, some researchers work on expanding and enriching the context of data from Wikipedia BIBREF3 or an ontology BIBREF4 . However, these methods involve solid Natural Language Processing (NLP) knowledge and still use high-dimensional representation which may result in a waste of both memory and computation time. Another way to overcome these issues is to explore some sophisticated models to cluster short texts. For example, Yin and Wang BIBREF5 proposed a Dirichlet multinomial mixture model-based approach for short text clustering. Yet how to design an effective model is an open question, and most of these methods directly trained based on Bag-of-Words (BoW) are shallow structures which cannot preserve the accurate semantic similarities.",
"Recently, with the help of word embedding, neural networks demonstrate their great performance in terms of constructing text representation, such as Recursive Neural Network (RecNN) BIBREF6 , BIBREF7 and Recurrent Neural Network (RNN) BIBREF8 . However, RecNN exhibits high time complexity to construct the textual tree, and RNN, using the hidden layer computed at the last word to represent the text, is a biased model where later words are more dominant than earlier words BIBREF9 . Whereas for the non-biased models, the learned representation of one text can be extracted from all the words in the text with non-dominant learned weights. More recently, Convolution Neural Network (CNN), as the most popular non-biased model and applying convolutional filters to capture local features, has achieved a better performance in many NLP applications, such as sentence modeling BIBREF10 , relation classification BIBREF11 , and other traditional NLP tasks BIBREF12 . Most of the previous works focus CNN on solving supervised NLP tasks, while in this paper we aim to explore the power of CNN on one unsupervised NLP task, short text clustering.",
"We systematically introduce a simple yet surprisingly powerful Self-Taught Convolutional neural network framework for Short Text Clustering, called STC INLINEFORM0 . An overall architecture of our proposed approach is illustrated in Figure FIGREF5 . We, inspired by BIBREF13 , BIBREF14 , utilize a self-taught learning framework into our task. In particular, the original raw text features are first embedded into compact binary codes INLINEFORM1 with the help of one traditional unsupervised dimensionality reduction function. Then text matrix INLINEFORM2 projected from word embeddings are fed into CNN model to learn the deep feature representation INLINEFORM3 and the output units are used to fit the pre-trained binary codes INLINEFORM4 . After obtaining the learned features, K-means algorithm is employed on them to cluster texts into clusters INLINEFORM5 . Obviously, we call our approach “self-taught” because the CNN model is learnt from the pseudo labels generated from the previous stage, which is quite different from the term “self-taught” in BIBREF15 . Our main contributions can be summarized as follows:",
"This work is an extension of our conference paper BIBREF16 , and they differ in the following aspects. First, we put forward a general a self-taught CNN framework in this paper which can flexibly couple various semantic features, whereas the conference version can be seen as a specific example of this work. Second, in this paper we use a new short text dataset, Biomedical, in the experiment to verify the effectiveness of our approach. Third, we put much effort on studying the influence of various different semantic features integrated in our self-taught CNN framework, which is not involved in the conference paper.",
"For the purpose of reproducibility, we make the datasets and software used in our experiments publicly available at the website.",
"The remainder of this paper is organized as follows: In Section SECREF2 , we first briefly survey several related works. In Section SECREF3 , we describe the proposed approach STC INLINEFORM0 and implementation details. Experimental results and analyses are presented in Section SECREF4 . Finally, conclusions are given in the last Section."
],
[
"In this section, we review the related work from the following two perspectives: short text clustering and deep neural networks."
],
[
"There have been several studies that attempted to overcome the sparseness of short text representation. One way is to expand and enrich the context of data. For example, Banerjee et al. BIBREF3 proposed a method of improving the accuracy of short text clustering by enriching their representation with additional features from Wikipedia, and Fodeh et al. BIBREF4 incorporate semantic knowledge from an ontology into text clustering. However, these works need solid NLP knowledge and still use high-dimensional representation which may result in a waste of both memory and computation time. Another direction is to map the original features into reduced space, such as Latent Semantic Analysis (LSA) BIBREF17 , Laplacian Eigenmaps (LE) BIBREF18 , and Locality Preserving Indexing (LPI) BIBREF19 . Even some researchers explored some sophisticated models to cluster short texts. For example, Yin and Wang BIBREF5 proposed a Dirichlet multinomial mixture model-based approach for short text clustering. Moreover, some studies even focus the above both two streams. For example, Tang et al. BIBREF20 proposed a novel framework which enrich the text features by employing machine translation and reduce the original features simultaneously through matrix factorization techniques.",
"Despite the above clustering methods can alleviate sparseness of short text representation to some extent, most of them ignore word order in the text and belong to shallow structures which can not fully capture accurate semantic similarities."
],
[
"Recently, there is a revival of interest in DNN and many researchers have concentrated on using Deep Learning to learn features. Hinton and Salakhutdinov BIBREF21 use DAE to learn text representation. During the fine-tuning procedure, they use backpropagation to find codes that are good at reconstructing the word-count vector.",
"More recently, researchers propose to use external corpus to learn a distributed representation for each word, called word embedding BIBREF22 , to improve DNN performance on NLP tasks. The Skip-gram and continuous bag-of-words models of Word2vec BIBREF23 propose a simple single-layer architecture based on the inner product between two word vectors, and Pennington et al. BIBREF24 introduce a new model for word representation, called GloVe, which captures the global corpus statistics.",
"In order to learn the compact representation vectors of sentences, Le and Mikolov BIBREF25 directly extend the previous Word2vec BIBREF23 by predicting words in the sentence, which is named Paragraph Vector (Para2vec). Para2vec is still a shallow window-based method and need a larger corpus to yield better performance. More neural networks utilize word embedding to capture true meaningful syntactic and semantic regularities, such as RecNN BIBREF6 , BIBREF7 and RNN BIBREF8 . However, RecNN exhibits high time complexity to construct the textual tree, and RNN, using the layer computed at the last word to represent the text, is a biased model. Recently, Long Short-Term Memory (LSTM) BIBREF26 and Gated Recurrent Unit (GRU) BIBREF27 , as sophisticated recurrent hidden units of RNN, has presented its advantages in many sequence generation problem, such as machine translation BIBREF28 , speech recognition BIBREF29 , and text conversation BIBREF30 . While, CNN is better to learn non-biased implicit features which has been successfully exploited for many supervised NLP learning tasks as described in Section SECREF1 , and various CNN based variants are proposed in the recent works, such as Dynamic Convolutional Neural Network (DCNN) BIBREF10 , Gated Recursive Convolutional Neural Network (grConv) BIBREF31 and Self-Adaptive Hierarchical Sentence model (AdaSent) BIBREF32 .",
"In the past few days, Visin et al. BIBREF33 have attempted to replace convolutional layer in CNN to learn non-biased features for object recognition with four RNNs, called ReNet, that sweep over lower-layer features in different directions: (1) bottom to top, (2) top to bottom, (3) left to right and (4) right to left. However, ReNet does not outperform state-of-the-art convolutional neural networks on any of the three benchmark datasets, and it is also a supervised learning model for classification. Inspired by Skip-gram of word2vec BIBREF34 , BIBREF23 , Skip-thought model BIBREF35 describe an approach for unsupervised learning of a generic, distributed sentence encoder. Similar as Skip-gram model, Skip-thought model trains an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded sentence and released an off-the-shelf encoder to extract sentence representation. Even some researchers introduce continuous Skip-gram and negative sampling to CNN for learning visual representation in an unsupervised manner BIBREF36 . This paper, from a new perspective, puts forward a general self-taught CNN framework which can flexibly couple various semantic features and achieve a good performance on one unsupervised learning task, short text clustering."
],
[
"Assume that we are given a dataset of INLINEFORM0 training texts denoted as: INLINEFORM1 , where INLINEFORM2 is the dimensionality of the original BoW representation. Denote its tag set as INLINEFORM3 and the pre-trained word embedding set as INLINEFORM4 , where INLINEFORM5 is the dimensionality of word vectors and INLINEFORM6 is the vocabulary size. In order to learn the INLINEFORM7 -dimensional deep feature representation INLINEFORM8 from CNN in an unsupervised manner, some unsupervised dimensionality reduction methods INLINEFORM9 are employed to guide the learning of CNN model. Our goal is to cluster these texts INLINEFORM10 into clusters INLINEFORM11 based on the learned deep feature representation while preserving the semantic consistency.",
"As depicted in Figure FIGREF5 , the proposed framework consist of three components, deep convolutional neural network (CNN), unsupervised dimensionality reduction function and K-means module. In the rest sections, we first present the first two components respectively, and then give the trainable parameters and the objective function to learn the deep feature representation. Finally, the last section describe how to perform clustering on the learned features."
],
[
"In this section, we briefly review one popular deep convolutional neural network, Dynamic Convolutional Neural Network (DCNN) BIBREF10 as an instance of CNN in the following sections, which as the foundation of our proposed method has been successfully proposed for the completely supervised learning task, text classification.",
"Taking a neural network with two convolutional layers in Figure FIGREF9 as an example, the network transforms raw input text to a powerful representation. Particularly, each raw text vector INLINEFORM0 is projected into a matrix representation INLINEFORM1 by looking up a word embedding INLINEFORM2 , where INLINEFORM3 is the length of one text. We also let INLINEFORM4 and INLINEFORM5 denote the weights of the neural networks. The network defines a transformation INLINEFORM6 INLINEFORM7 which transforms an input raw text INLINEFORM8 to a INLINEFORM9 -dimensional deep representation INLINEFORM10 . There are three basic operations described as follows:",
"Wide one-dimensional convolution This operation INLINEFORM0 is applied to an individual row of the sentence matrix INLINEFORM1 , and yields a resulting matrix INLINEFORM2 , where INLINEFORM3 is the width of convolutional filter.",
"Folding In this operation, every two rows in a feature map are simply summed component-wisely. For a map of INLINEFORM0 rows, folding returns a map of INLINEFORM1 rows, thus halving the size of the representation and yielding a matrix feature INLINEFORM2 . Note that folding operation does not introduce any additional parameters.",
"Dynamic INLINEFORM0 -max pooling Assuming the pooling parameter as INLINEFORM1 , INLINEFORM2 -max pooling selects the sub-matrix INLINEFORM3 of the INLINEFORM4 highest values in each row of the matrix INLINEFORM5 . For dynamic INLINEFORM6 -max pooling, the pooling parameter INLINEFORM7 is dynamically selected in order to allow for a smooth extraction of higher-order and longer-range features BIBREF10 . Given a fixed pooling parameter INLINEFORM8 for the topmost convolutional layer, the parameter INLINEFORM9 of INLINEFORM10 -max pooling in the INLINEFORM11 -th convolutional layer can be computed as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the total number of convolutional layers in the network."
],
[
"As described in Figure FIGREF5 , the dimensionality reduction function is defined as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 are the INLINEFORM1 -dimensional reduced latent space representations. Here, we take four popular dimensionality reduction methods as examples in our framework.",
"Average Embedding (AE): This method directly averages the word embeddings which are respectively weighted with TF and TF-IDF. Huang et al. BIBREF37 used this strategy as the global context in their task, and Socher et al. BIBREF7 and Lai et al. BIBREF9 used this method for text classification. The weighted average of all word vectors in one text can be computed as follows: DISPLAYFORM0 ",
"where INLINEFORM0 can be any weighting function that captures the importance of word INLINEFORM1 in the text INLINEFORM2 .",
"Latent Semantic Analysis (LSA): LSA BIBREF17 is the most popular global matrix factorization method, which applies a dimension reducing linear projection, Singular Value Decomposition (SVD), of the corresponding term/document matrix. Suppose the rank of INLINEFORM0 is INLINEFORM1 , LSA decompose INLINEFORM2 into the product of three other matrices: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are the singular values of INLINEFORM2 , INLINEFORM3 is a set of left singular vectors and INLINEFORM4 is a set of right singular vectors. LSA uses the top INLINEFORM5 vectors in INLINEFORM6 as the transformation matrix to embed the original text features into a INLINEFORM7 -dimensional subspace INLINEFORM8 BIBREF17 .",
"Laplacian Eigenmaps (LE): The top eigenvectors of graph Laplacian, defined on the similarity matrix of texts, are used in the method, which can discover the manifold structure of the text space BIBREF18 . In order to avoid storing the dense similarity matrix, many approximation techniques are proposed to reduce the memory usage and computational complexity for LE. There are two representative approximation methods, sparse similarity matrix and Nystr INLINEFORM0 m approximation. Following previous studies BIBREF38 , BIBREF13 , we select the former technique to construct the INLINEFORM1 local similarity matrix INLINEFORM2 by using heat kernel as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 is a tuning parameter (default is 1) and INLINEFORM1 represents the set of INLINEFORM2 -nearest-neighbors of INLINEFORM3 . By introducing a diagonal INLINEFORM4 matrix INLINEFORM5 whose entries are given by INLINEFORM6 , the graph Laplacian INLINEFORM7 can be computed by ( INLINEFORM8 ). The optimal INLINEFORM9 real-valued matrix INLINEFORM10 can be obtained by solving the following objective function: DISPLAYFORM0 ",
"where INLINEFORM0 is the trace function, INLINEFORM1 requires the different dimensions to be uncorrelated, and INLINEFORM2 requires each dimension to achieve equal probability as positive or negative).",
"Locality Preserving Indexing (LPI): This method extends LE to deal with unseen texts by approximating the linear function INLINEFORM0 BIBREF13 , and the subspace vectors are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the Riemannian manifold BIBREF19 . Similar as LE, we first construct the local similarity matrix INLINEFORM1 , then the graph Laplacian INLINEFORM2 can be computed by ( INLINEFORM3 ), where INLINEFORM4 measures the local density around INLINEFORM5 and is equal to INLINEFORM6 . Compute the eigenvectors INLINEFORM7 and eigenvalues INLINEFORM8 of the following generalized eigen-problem: DISPLAYFORM0 ",
"The mapping function INLINEFORM0 can be obtained and applied to the unseen data BIBREF38 .",
"All of the above methods claim a better performance in capturing semantic similarity between texts in the reduced latent space representation INLINEFORM0 than in the original representation INLINEFORM1 , while the performance of short text clustering can be further enhanced with the help of our framework, self-taught CNN."
],
[
"The last layer of CNN is an output layer as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 is the deep feature representation, INLINEFORM1 is the output vector and INLINEFORM2 is weight matrix.",
"In order to incorporate the latent semantic features INLINEFORM0 , we first binary the real-valued vectors INLINEFORM1 to the binary codes INLINEFORM2 by setting the threshold to be the media vector INLINEFORM3 . Then, the output vector INLINEFORM4 is used to fit the binary codes INLINEFORM5 via INLINEFORM6 logistic operations as follows: DISPLAYFORM0 ",
"All parameters to be trained are defined as INLINEFORM0 . DISPLAYFORM0 ",
"Given the training text collection INLINEFORM0 , and the pre-trained binary codes INLINEFORM1 , the log likelihood of the parameters can be written down as follows: DISPLAYFORM0 ",
"Following the previous work BIBREF10 , we train the network with mini-batches by back-propagation and perform the gradient-based optimization using the Adagrad update rule BIBREF39 . For regularization, we employ dropout with 50% rate to the penultimate layer BIBREF10 , BIBREF40 ."
],
[
"With the given short texts, we first utilize the trained deep neural network to obtain the semantic representations INLINEFORM0 , and then employ traditional K-means algorithm to perform clustering."
],
[
"We test our proposed approach on three public short text datasets. The summary statistics and semantic topics of these datasets are described in Table TABREF24 and Table TABREF25 .",
"SearchSnippets. This dataset was selected from the results of web search transaction using predefined phrases of 8 different domains by Phan et al. BIBREF41 .",
"StackOverflow. We use the challenge data published in Kaggle.com. The raw dataset consists 3,370,528 samples through July 31st, 2012 to August 14, 2012. In our experiments, we randomly select 20,000 question titles from 20 different tags as in Table TABREF25 .",
"Biomedical. We use the challenge data published in BioASQ's official website. In our experiments, we randomly select 20, 000 paper titles from 20 different MeSH major topics as in Table TABREF25 . As described in Table TABREF24 , the max length of selected paper titles is 53.",
"For these datasets, we randomly select 10% of data as the development set. Since SearchSnippets has been pre-processed by Phan et al. BIBREF41 , we do not further process this dataset. In StackOverflow, texts contain lots of computer terminology, and symbols and capital letters are meaningful, thus we do not do any pre-processed procedures. For Biomedical, we remove the symbols and convert letters into lower case."
],
[
"We use the publicly available word2vec tool to train word embeddings, and the most parameters are set as same as Mikolov et al. BIBREF23 to train word vectors on Google News setting, except of vector dimensionality using 48 and minimize count using 5. For SearchSnippets, we train word vectors on Wikipedia dumps. For StackOverflow, we train word vectors on the whole corpus of the StackOverflow dataset described above which includes the question titles and post contents. For Biomedical, we train word vectors on all titles and abstracts of 2014 training articles. The coverage of these learned vectors on three datasets are listed in Table TABREF32 , and the words not present in the set of pre-trained words are initialized randomly."
],
[
"In our experiment, some widely used text clustering methods are compared with our approach. Besides K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods, four baseline clustering methods are directly based on the popular unsupervised dimensionality reduction methods as described in Section SECREF11 . We further compare our approach with some other non-biased neural networks, such as bidirectional RNN. More details are listed as follows:",
"K-means K-means BIBREF42 on original keyword features which are respectively weighted with term frequency (TF) and term frequency-inverse document frequency (TF-IDF).",
"Skip-thought Vectors (SkipVec) This baseline BIBREF35 gives an off-the-shelf encoder to produce highly generic sentence representations. The encoder is trained using a large collection of novels and provides three encoder modes, that are unidirectional encoder (SkipVec (Uni)) with 2,400 dimensions, bidirectional encoder (SkipVec (Bi)) with 2,400 dimensions and combined encoder (SkipVec (Combine)) with SkipVec (Uni) and SkipVec (Bi) of 2,400 dimensions each. K-means is employed on the these vector representations respectively.",
"Recursive Neural Network (RecNN) In BIBREF6 , the tree structure is firstly greedy approximated via unsupervised recursive autoencoder. Then, semi-supervised recursive autoencoders are used to capture the semantics of texts based on the predicted structure. In order to make this recursive-based method completely unsupervised, we remove the cross-entropy error in the second phrase to learn vector representation and subsequently employ K-means on the learned vectors of the top tree node and the average of all vectors in the tree.",
"Paragraph Vector (Para2vec) K-means on the fixed size feature vectors generated by Paragraph Vector (Para2vec) BIBREF25 which is an unsupervised method to learn distributed representation of words and paragraphs. In our experiments, we use the open source software released by Mesnil et al. BIBREF43 .",
"Average Embedding (AE) K-means on the weighted average vectors of the word embeddings which are respectively weighted with TF and TF-IDF. The dimension of average vectors is equal to and decided by the dimension of word vectors used in our experiments.",
"Latent Semantic Analysis (LSA) K-means on the reduced subspace vectors generated by Singular Value Decomposition (SVD) method. The dimension of subspace is default set to the number of clusters, we also iterate the dimensions ranging from 10:10:200 to get the best performance, that is 10 on SearchSnippets, 20 on StackOverflow and 20 on Biomedical in our experiments.",
"Laplacian Eigenmaps (LE) This baseline, using Laplacian Eigenmaps and subsequently employing K-means algorithm, is well known as spectral clustering BIBREF44 . The dimension of subspace is default set to the number of clusters BIBREF18 , BIBREF38 , we also iterate the dimensions ranging from 10:10:200 to get the best performance, that is 20 on SearchSnippets, 70 on StackOverflow and 30 on Biomedical in our experiments.",
"Locality Preserving Indexing (LPI) This baseline, projecting the texts into a lower dimensional semantic space, can discover both the geometric and discriminating structures of the original feature space BIBREF38 . The dimension of subspace is default set to the number of clusters BIBREF38 , we also iterate the dimensions ranging from 10:10:200 to get the best performance, that is 20 on SearchSnippets, 80 on StackOverflow and 30 on Biomedical in our experiments.",
"bidirectional RNN (bi-RNN) We replace the CNN model in our framework as in Figure FIGREF5 with some bi-RNN models. Particularly, LSTM and GRU units are used in the experiments. In order to generate the fixed-length document representation from the variable-length vector sequences, for both bi-LSTM and bi-GRU based clustering methods, we further utilize three pooling methods: last pooling (using the last hidden state), mean pooling and element-wise max pooling. These pooling methods are respectively used in the previous works BIBREF45 , BIBREF27 , BIBREF46 and BIBREF9 . For regularization, the training gradients of all parameters with an INLINEFORM0 2 norm larger than 40 are clipped to 40, as the previous work BIBREF47 ."
],
[
"The clustering performance is evaluated by comparing the clustering results of texts with the tags/labels provided by the text corpus. Two metrics, the accuracy (ACC) and the normalized mutual information metric (NMI), are used to measure the clustering performance BIBREF38 , BIBREF48 . Given a text INLINEFORM0 , let INLINEFORM1 and INLINEFORM2 be the obtained cluster label and the label provided by the corpus, respectively. Accuracy is defined as: DISPLAYFORM0 ",
"where, INLINEFORM0 is the total number of texts, INLINEFORM1 is the indicator function that equals one if INLINEFORM2 and equals zero otherwise, and INLINEFORM3 is the permutation mapping function that maps each cluster label INLINEFORM4 to the equivalent label from the text data by Hungarian algorithm BIBREF49 .",
"Normalized mutual information BIBREF50 between tag/label set INLINEFORM0 and cluster set INLINEFORM1 is a popular metric used for evaluating clustering tasks. It is defined as follows: DISPLAYFORM0 ",
"where, INLINEFORM0 is the mutual information between INLINEFORM1 and INLINEFORM2 , INLINEFORM3 is entropy and the denominator INLINEFORM4 is used for normalizing the mutual information to be in the range of [0, 1]."
],
[
"The most of parameters are set uniformly for these datasets. Following previous study BIBREF38 , the number of nearest neighbors in Eqn. ( EQREF15 ) is fixed to 15 when constructing the graph structures for LE and LPI. For CNN model, the networks has two convolutional layers. The widths of the convolutional filters are both 3. The value of INLINEFORM0 for the top INLINEFORM1 -max pooling in Eqn. ( EQREF10 ) is 5. The number of feature maps at the first convolutional layer is 12, and 8 feature maps at the second convolutional layer. Both those two convolutional layers are followed by a folding layer. We further set the dimension of word embeddings INLINEFORM2 as 48. Finally, the dimension of the deep feature representation INLINEFORM3 is fixed to 480. Moreover, we set the learning rate INLINEFORM4 as 0.01 and the mini-batch training size as 200. The output size INLINEFORM5 in Eqn. ( EQREF19 ) is set same as the best dimensions of subspace in the baseline method, as described in Section SECREF37 .",
"For initial centroids have significant impact on clustering results when utilizing the K-means algorithms, we repeat K-means for multiple times with random initial centroids (specifically, 100 times for statistical significance) as Huang BIBREF48 . The all subspace vectors are normalized to 1 before applying K-means and the final results reported are the average of 5 trials with all clustering methods on three text datasets."
],
[
"In Table TABREF43 and Table TABREF44 , we report the ACC and NMI performance of our proposed approaches and four baseline methods, K-means, SkipVec, RecNN and Para2vec based clustering methods. Intuitively, we get a general observation that (1) BoW based approaches, including K-means (TF) and K-means (TF-IDF), and SkipVec based approaches perform not well; (2) RecNN based approaches, both RecNN (Ave.) and RecNN (Top+Ave.), do better; (3) Para2vec makes a comparable performance with the most baselines; and (4) the evaluation clearly demonstrate the superiority of our proposed methods STC INLINEFORM0 . It is an expected results. For SkipVec based approaches, the off-the-shelf encoders are trained on the BookCorpus datasets BIBREF51 , and then applied to our datasets to extract the sentence representations. The SkipVec encoders can produce generic sentence representations but may not perform well for specific datasets, in our experiments, StackOverflow and Biomedical datasets consist of many computer terms and medical terms, such as “ASP.NET”, “XML”, “C#”, “serum” and “glycolytic”. When we take a more careful look, we find that RecNN (Top) does poorly, even worse than K-means (TF-IDF). The reason maybe that although recursive neural models introduce tree structure to capture compositional semantics, the vector of the top node mainly captures a biased semantic while the average of all vectors in the tree nodes, such as RecNN (Ave.), can be better to represent sentence level semantic. And we also get another observation that, although our proposed STC INLINEFORM1 -LE and STC INLINEFORM2 -LPI outperform both BoW based and RecNN based approaches across all three datasets, STC INLINEFORM3 -AE and STC INLINEFORM4 -LSA do just exhibit some similar performances as RecNN (Ave.) and RecNN (Top+Ave.) do in the datasets of StackOverflow and Biomedical.",
"We further replace the CNN model in our framework as in Figure FIGREF5 with some other non-biased models, such as bi-LSTM and bi-GRU, and report the results in Table TABREF46 and Table TABREF47 . As an instance, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models. From the results, we can see that bi-GRU and bi-LSTM based clustering methods do equally well, no clear winner, and both achieve great enhancements compared with LPI (best). Compared with these bi-LSTM/bi-GRU based models, the evaluation results still demonstrate the superiority of our approach methods, CNN based clustering model, in the most cases. As the results reported by Visin et al. BIBREF33 , despite bi-directional or multi-directional RNN models perform a good non-biased feature extraction, they yet do not outperform state-of-the-art CNN on some tasks.",
"In order to make clear what factors make our proposed method work, we report the bar chart results of ACC and MNI of our proposed methods and the corresponding baseline methods in Figure FIGREF49 and Figure FIGREF53 . It is clear that, although AE and LSA does well or even better than LE and LPI, especially in dataset of both StackOverflow and Biomedical, STC INLINEFORM0 -LE and STC INLINEFORM1 -LPI achieve a much larger performance enhancements than STC INLINEFORM2 -AE and STC INLINEFORM3 -LSA do. The possible reason is that the information the pseudo supervision used to guide the learning of CNN model that make difference. Especially, for AE case, the input features fed into CNN model and the pseudo supervision employed to guide the learning of CNN model are all come from word embeddings. There are no different semantic features to be used into our proposed method, thus the performance enhancements are limited in STC INLINEFORM4 -AE. For LSA case, as we known, LSA is to make matrix factorization to find the best subspace approximation of the original feature space to minimize the global reconstruction error. And as BIBREF24 , BIBREF52 recently point out that word embeddings trained with word2vec or some variances, is essentially to do an operation of matrix factorization. Therefore, the information between input and the pseudo supervision in CNN is not departed very largely from each other, and the performance enhancements of STC INLINEFORM5 -AE is also not quite satisfactory. For LE and LPI case, as we known that LE extracts the manifold structure of the original feature space, and LPI extracts both geometric and discriminating structure of the original feature space BIBREF38 . We guess that our approach STC INLINEFORM6 -LE and STC INLINEFORM7 -LPI achieve enhancements compared with both LE and LPI by a large margin, because both of LE and LPI get useful semantic features, and these features are also different from word embeddings used as input of CNN. From this view, we say that our proposed STC has potential to behave more effective when the pseudo supervision is able to get semantic meaningful features, which is different enough from the input of CNN.",
"Furthermore, from the results of K-means and AE in Table TABREF43 - TABREF44 and Figure FIGREF49 - FIGREF53 , we note that TF-IDF weighting gives a more remarkable improvement for K-means, while TF weighting works better than TF-IDF weighting for Average Embedding. Maybe the reason is that pre-trained word embeddings encode some useful information from external corpus and are able to get even better results without TF-IDF weighting. Meanwhile, we find that LE get quite unusual good performance than LPI, LSA and AE in SearchSnippets dataset, which is not found in the other two datasets. To get clear about this, and also to make a much better demonstration about our proposed approaches and other baselines, we further report 2-dimensional text embeddings on SearchSnippets in Figure FIGREF58 , using t-SNE BIBREF53 to get distributed stochastic neighbor embedding of the feature representations used in the clustering methods. We can see that the results of from AE and LSA seem to be fairly good or even better than the ones from LE and LPI, which is not the same as the results from ACC and NMI in Figure FIGREF49 - FIGREF53 . Meanwhile, RecNN (Ave.) performs better than BoW (both TF and TF-IDF) while RecNN (Top) does not, which is the same as the results from ACC and NMI in Table TABREF43 and Table TABREF44 . Then we guess that both ”the same as” and ”not the same as” above, is just a good example to illustrate that visualization tool, such as t-SNE, get some useful information for measuring results, which is different from the ones of ACC and NMI. Moreover, from this complementary view of t-SNE, we can see that our STC INLINEFORM0 -AE, STC INLINEFORM1 -LSA, STC INLINEFORM2 -LE, and STC INLINEFORM3 -LPI show more clear-cut margins among different semantic topics (that is, tags/labels), compared with AE, LSA, LE and LPI, respectively, as well as compared with both baselines, BoW and RecNN based ones.",
"From all these results, with three measures of ACC, NMI and t-SNE under three datasets, we can get a solid conclusion that our proposed approaches is an effective approaches to get useful semantic features for short text clustering."
],
[
"With the emergence of social media, short text clustering has become an increasing important task. This paper explores a new perspective to cluster short texts based on deep feature representation learned from the proposed self-taught convolutional neural networks. Our framework can be successfully accomplished without using any external tags/labels and complicated NLP pre-processing, and and our approach is a flexible framework, in which the traditional dimension reduction approaches could be used to get performance enhancement. Our extensive experimental study on three short text datasets shows that our approach can achieve a significantly better performance. In the future, how to select and incorporate more effective semantic features into the proposed framework would call for more research."
],
[
"We would like to thank reviewers for their comments, and acknowledge Kaggle and BioASQ for making the datasets available. This work is supported by the National Natural Science Foundation of China (No. 61602479, No. 61303172, No. 61403385) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB02070005)."
]
]
} | {
"question": [
"What were the evaluation metrics used?",
"What were their performance results?",
"By how much did they outperform the other methods?",
"Which popular clustering methods did they experiment with?",
"What datasets did they use?"
],
"question_id": [
"62a6382157d5f9c1dce6e6c24ac5994442053002",
"9e04730907ad728d62049f49ac828acb4e0a1a2a",
"5a0841cc0628e872fe473874694f4ab9411a1d10",
"a5dd569e6d641efa86d2c2b2e970ce5871e0963f",
"785c054f6ea04701f4ab260d064af7d124260ccc"
],
"nlp_background": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"accuracy",
"normalized mutual information"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The clustering performance is evaluated by comparing the clustering results of texts with the tags/labels provided by the text corpus. Two metrics, the accuracy (ACC) and the normalized mutual information metric (NMI), are used to measure the clustering performance BIBREF38 , BIBREF48 . Given a text INLINEFORM0 , let INLINEFORM1 and INLINEFORM2 be the obtained cluster label and the label provided by the corpus, respectively. Accuracy is defined as: DISPLAYFORM0"
],
"highlighted_evidence": [
"Two metrics, the accuracy (ACC) and the normalized mutual information metric (NMI), are used to measure the clustering performance BIBREF38 , BIBREF48 . "
]
}
],
"annotation_id": [
"ce1b6507ec3bde25d3bf800bb829aae3b20f8e02"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "On SearchSnippets dataset ACC 77.01%, NMI 62.94%, on StackOverflow dataset ACC 51.14%, NMI 49.08%, on Biomedical dataset ACC 43.00%, NMI 38.18%",
"evidence": [
"FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models."
]
}
],
"annotation_id": [
"0a50b0b01688b81afa0e69e67c0d17fb4a0115bd"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "on SearchSnippets dataset by 6.72% in ACC, by 6.94% in NMI; on Biomedical dataset by 5.77% in ACC, 3.91% in NMI",
"evidence": [
"FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"FLOAT SELECTED: Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models."
]
}
],
"annotation_id": [
"fd3954e5af3582cee36835e85c7a5efd5e121874"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In our experiment, some widely used text clustering methods are compared with our approach. Besides K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods, four baseline clustering methods are directly based on the popular unsupervised dimensionality reduction methods as described in Section SECREF11 . We further compare our approach with some other non-biased neural networks, such as bidirectional RNN. More details are listed as follows:"
],
"highlighted_evidence": [
"In our experiment, some widely used text clustering methods are compared with our approach. Besides K-means, Skip-thought Vectors, Recursive Neural Network and Paragraph Vector based clustering methods, four baseline clustering methods are directly based on the popular unsupervised dimensionality reduction methods as described in Section SECREF11 . "
]
}
],
"annotation_id": [
"019aab7aedefee06681de16eae65bd3031125b84"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"SearchSnippets",
"StackOverflow",
"Biomedical"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We test our proposed approach on three public short text datasets. The summary statistics and semantic topics of these datasets are described in Table TABREF24 and Table TABREF25 .",
"SearchSnippets. This dataset was selected from the results of web search transaction using predefined phrases of 8 different domains by Phan et al. BIBREF41 .",
"StackOverflow. We use the challenge data published in Kaggle.com. The raw dataset consists 3,370,528 samples through July 31st, 2012 to August 14, 2012. In our experiments, we randomly select 20,000 question titles from 20 different tags as in Table TABREF25 .",
"Biomedical. We use the challenge data published in BioASQ's official website. In our experiments, we randomly select 20, 000 paper titles from 20 different MeSH major topics as in Table TABREF25 . As described in Table TABREF24 , the max length of selected paper titles is 53."
],
"highlighted_evidence": [
"We test our proposed approach on three public short text datasets. ",
"SearchSnippets. This dataset was selected from the results of web search transaction using predefined phrases of 8 different domains by Phan et al. BIBREF41 .",
"StackOverflow. We use the challenge data published in Kaggle.com. ",
"Biomedical. We use the challenge data published in BioASQ's official website. "
]
}
],
"annotation_id": [
"0c80e649e7d54bf39704d39397af73f3b4847199"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
]
} | {
"caption": [
"Figure 1: The architecture of our proposed STC2 framework for short text clustering. Solid and hollow arrows represent forward and backward propagation directions of features and gradients respectively. The STC2 framework consist of deep convolutional neural network (CNN), unsupervised dimensionality reduction function and K-means module on the deep feature representation from the top hidden layers of CNN.",
"Figure 2: The architecture of dynamic convolutional neural network [11]. An input text is first projected to a matrix feature by looking up word embeddings, and then goes through wide convolutional layers, folding layers and k-max pooling layers, which provides a deep feature representation before the output layer.",
"Table 1: Statistics for the text datasets. C: the number of classes; Num: the dataset size; Len.: the mean/max length of texts and |V |: the vocabulary size.",
"Table 3: Coverage of word embeddings on three datasets. |V | is the vocabulary size and |T | is the number of tokens.",
"Table 6: Comparison of ACC of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"Table 7: Comparison of NMI of our proposed methods and some other non-biased models on three datasets. For LPI, we project the text under the best dimension as described in Section 4.3. For both bi-LSTM and bi-GRU based clustering methods, the binary codes generated from LPI are used to guide the learning of bi-LSTM/bi-GRU models.",
"Figure 3: ACC results on three short text datasets using our proposed STC2 based on AE, LSA, LE and LPI.",
"Figure 4: NMI results on three short text datasets using our proposed STC2 based on AE, LSA, LE and LPI.",
"Figure 5: A 2-dimensional embedding of original keyword features weighted with (a) TF and (b) TF-IDF, (c) vectors of the top tree node in RecNN, (d) average vectors of all tree node in RecNN, (e) average embeddings weighted with TF, subspace features based on (f) LSA, (g) LE and (h) LPI, deep learned features from (i) STC2-AE, (j) STC2-LSA, (k) STC2-LE and (l) STC2-LPI. All above features are respectively used in K-means (TF), K-means (TF-IDF), RecNN (Top), RecNN (Ave.), AE (TF), LSA(best), LE (best), LPI (best), and our proposed STC2-AE, STC2-LSA, STC2-LE and STC2-LPI on SearchSnippets. (Best viewed in color)"
],
"file": [
"5-Figure1-1.png",
"9-Figure2-1.png",
"14-Table1-1.png",
"16-Table3-1.png",
"22-Table6-1.png",
"23-Table7-1.png",
"24-Figure3-1.png",
"25-Figure4-1.png",
"27-Figure5-1.png"
]
} |
1912.00871 | Solving Arithmetic Word Problems Automatically Using Transformer and Unambiguous Representations | Constructing accurate and automatic solvers of math word problems has proven to be quite challenging. Prior attempts using machine learning have been trained on corpora specific to math word problems to produce arithmetic expressions in infix notation before answer computation. We find that custom-built neural networks have struggled to generalize well. This paper outlines the use of Transformer networks trained to translate math word problems to equivalent arithmetic expressions in infix, prefix, and postfix notations. In addition to training directly on domain-specific corpora, we use an approach that pre-trains on a general text corpus to provide foundational language abilities to explore if it improves performance. We compare results produced by a large number of neural configurations and find that most configurations outperform previously reported approaches on three of four datasets with significant increases in accuracy of over 20 percentage points. The best neural approaches boost accuracy by almost 10% on average when compared to the previous state of the art. | {
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Approach ::: Data",
"Approach ::: Representation Conversion",
"Approach ::: Pre-training",
"Approach ::: Method: Training and Testing",
"Approach ::: Method: Training and Testing ::: Objective Function",
"Approach ::: Method: Training and Testing ::: Experiment 1: Representation",
"Approach ::: Method: Training and Testing ::: Experiment 2: State-of-the-art",
"Approach ::: Method: Training and Testing ::: Effect of Pre-training",
"Results",
"Results ::: Analysis",
"Results ::: Analysis ::: Error Analysis",
"Conclusions and Future Work",
"Acknowledgement"
],
"paragraphs": [
[
"Students are exposed to simple arithmetic word problems starting in elementary school, and most become proficient in solving them at a young age. Automatic solvers of such problems could potentially help educators, as well as become an integral part of general question answering services. However, it has been challenging to write programs to solve even such elementary school level problems well.",
"Solving a math word problem (MWP) starts with one or more sentences describing a transactional situation to be understood. The sentences are processed to produce an arithmetic expression, which is evaluated to provide an answer. Recent neural approaches to solving arithmetic word problems have used various flavors of recurrent neural networks (RNN) as well as reinforcement learning. Such methods have had difficulty achieving a high level of generalization. Often, systems extract the relevant numbers successfully but misplace them in the generated expressions. More problematic, they get the arithmetic operations wrong. The use of infix notation also requires pairs of parentheses to be placed and balanced correctly, bracketing the right numbers. There have been problems with parentheses placement as well.",
"Correctly extracting the numbers in the problem is necessary. Figure FIGREF1 gives examples of some infix representations that a machine learning solver can potentially produce from a simple word problem using the correct numbers. Of the expressions shown, only the first one is correct. After carefully observing expressions that actual problem solvers have generated, we want to explore if the use of infix notation may itself be a part of the problem because it requires the generation of additional characters, the open and close parentheses, which must be balanced and placed correctly.",
"The actual numbers appearing in MWPs vary widely from problem to problem. Real numbers take any conceivable value, making it almost impossible for a neural network to learn representations for them. As a result, trained programs sometimes generate expressions that have seemingly random numbers. For example, in some runs, a trained program could generate a potentially inexplicable expression such as $(25.01 - 4) * 9$ for the problem given in Figure FIGREF1, with one or more numbers not in the problem sentences. We hypothesize that replacing the numbers in the problem statement with generic tags like $\\rm \\langle n1 \\rangle $, $\\rm \\langle n2 \\rangle $, and $\\rm \\langle n3 \\rangle $ and saving their values as a pre-processing step, does not take away from the generality of the solution, but suppresses the problem of fertility in number generation leading to the introduction of numbers not present in the question sentences.",
"Another idea we want to test is whether a neural network which has been pre-trained to acquire language knowledge is better able to “understand\" the problem sentences. Pre-training with a large amount of arithmetic-related text is likely to help develop such knowledge, but due to the lack of large such focused corpora, we want to test whether pre-training with a sufficient general corpus is beneficial.",
"In this paper, we use the Transformer model BIBREF0 to solve arithmetic word problems as a particular case of machine translation from text to the language of arithmetic expressions. Transformers in various configurations have become a staple of NLP in the past two years. Past neural approaches did not treat this problem as pure translation like we do, and additionally, these approaches usually augmented the neural architectures with various external modules such as parse trees or used deep reinforcement learning, which we do not do. In this paper, we demonstrate that Transformers can be used to solve MWPs successfully with the simple adjustments we describe above. We compare performance on four individual datasets. In particular, we show that our translation-based approach outperforms state-of-the-art results reported by BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5 by a large margin on three of four datasets tested. On average, our best neural architecture outperforms previous results by almost 10%, although our approach is conceptually more straightforward.",
"We organize our paper as follows. The second section presents related work. Then, we discuss our approach. We follow by an analysis of experimental results and compare them to those of other recent approaches. We also discuss our successes and shortcomings. Finally, we share our concluding thoughts and end with our direction for future work."
],
[
"Past strategies have used rules and templates to match sentences to arithmetic expressions. Some such approaches seemed to solve problems impressively within a narrow domain, but performed poorly when out of domain, lacking generality BIBREF6, BIBREF7, BIBREF8, BIBREF9. Kushman et al. BIBREF3 used feature extraction and template-based categorization by representing equations as expression forests and finding a near match. Such methods required human intervention in the form of feature engineering and development of templates and rules, which is not desirable for expandability and adaptability. Hosseini et al. BIBREF2 performed statistical similarity analysis to obtain acceptable results, but did not perform well with texts that were dissimilar to training examples.",
"Existing approaches have used various forms of auxiliary information. Hosseini et al. BIBREF2 used verb categorization to identify important mathematical cues and contexts. Mitra and Baral BIBREF10 used predefined formulas to assist in matching. Koncel-Kedziorski et al. BIBREF11 parsed the input sentences, enumerated all parses, and learned to match, requiring expensive computations. Roy and Roth BIBREF12 performed searches for semantic trees over large spaces.",
"Some recent approaches have transitioned to using neural networks. Semantic parsing takes advantage of RNN architectures to parse MWPs directly into equations or expressions in a math-specific language BIBREF9, BIBREF13. RNNs have shown promising results, but they have had difficulties balancing parenthesis, and also, sometimes incorrectly choose numbers when generating equations. Rehman et al. BIBREF14 used POS tagging and classification of equation templates to produce systems of equations from third-grade level MWPs. Most recently, Sun et al. BIBREF13 used a Bi-Directional LSTM architecture for math word problems. Huang et al. BIBREF15 used a deep reinforcement learning model to achieve character placement in both seen and novel equation templates. Wang et al. BIBREF1 also used deep reinforcement learning."
],
[
"We view math word problem solving as a sequence-to-sequence translation problem. RNNs have excelled in sequence-to-sequence problems such as translation and question answering. The recent introduction of attention mechanisms has improved the performance of RNN models. Vaswani et al. BIBREF0 introduced the Transformer network, which uses stacks of attention layers instead of recurrence. Applications of Transformers have achieved state-of-the-art performance in many NLP tasks. We use this architecture to produce character sequences that are arithmetic expressions. The models we experiment with are easy and efficient to train, allowing us to test several configurations for a comprehensive comparison. We use several configurations of Transformer networks to learn the prefix, postfix, and infix notations of MWP equations independently.",
"Prefix and postfix representations of equations do not contain parentheses, which has been a source of confusion in some approaches. If the learned target sequences are simple, with fewer characters to generate, it is less likely to make mistakes during generation. Simple targets also may help the learning of the model to be more robust. Experimenting with all three representations for equivalent expressions may help us discover which one works best.",
"We train on standard datasets, which are readily available and commonly used. Our method considers the translation of English text to simple algebraic expressions. After performing experiments by training directly on math word problem corpora, we perform a different set of experiments by pre-training on a general language corpus. The success of pre-trained models such as ELMo BIBREF16, GPT-2 BIBREF17, and BERT BIBREF18 for many natural language tasks, provides reasoning that pre-training is likely to produce better learning by our system. We use pre-training so that the system has some foundational knowledge of English before we train it on the domain-specific text of math word problems. However, the output is not natural language but algebraic expressions, which is likely to limit the effectiveness of such pre-training."
],
[
"We work with four individual datasets. The datasets contain addition, subtraction, multiplication, and division word problems.",
"AI2 BIBREF2. AI2 is a collection of 395 addition and subtraction problems, containing numeric values, where some may not be relevant to the question.",
"CC BIBREF19. The Common Core dataset contains 600 2-step questions. The Cognitive Computation Group at the University of Pennsylvania gathered these questions.",
"IL BIBREF4. The Illinois dataset contains 562 1-step algebra word questions. The Cognitive Computation Group compiled these questions also.",
"MAWPS BIBREF20. MAWPS is a relatively large collection, primarily from other MWP datasets. We use 2,373 of 3,915 MWPs from this set. The problems not used were more complex problems that generate systems of equations. We exclude such problems because generating systems of equations is not our focus.",
"We take a randomly sampled 95% of examples from each dataset for training. From each dataset, MWPs not included in training make up the testing data used when generating our results. Training and testing are repeated three times, and reported results are an average of the three outcomes."
],
[
"We take a simple approach to convert infix expressions found in the MWPs to the other two representations. Two stacks are filled by iterating through string characters, one with operators found in the equation and the other with the operands. From these stacks, we form a binary tree structure. Traversing an expression tree in pre-order results in a prefix conversion. Post-order traversal gives us a postfix expression. Three versions of our training and testing data are created to correspond to each type of expression. By training on different representations, we expect our test results to change."
],
[
"We pre-train half of our networks to endow them with a foundational knowledge of English. Pre-training models on significant-sized language corpora have been a common approach recently. We explore the pre-training approach using a general English corpus because the language of MWPs is regular English, interspersed with numerical values. Ideally, the corpus for pre-training should be a very general and comprehensive corpus like an English Wikipedia dump or many gigabytes of human-generated text scraped from the internet like GPT-2 BIBREF21 used. However, in this paper, we want to perform experiments to see if pre-training with a smaller corpus can help. In particular, for this task, we use the IMDb Movie Reviews dataset BIBREF22. This set contains 314,041 unique sentences. Since movie reviewers wrote this data, it is a reference to natural language not related to arithmetic. Training on a much bigger and general corpus may make the language model stronger, but we leave this for future work.",
"We compare pre-trained models to non-pre-trained models to observe performance differences. Our pre-trained models are trained in an unsupervised fashion to improve the encodings of our fine-tuned solvers. In the pre-training process, we use sentences from the IMDb reviews with a target output of an empty string. We leave the input unlabelled, which focuses the network on adjusting encodings while providing unbiased decoding when we later change from IMDb English text to MWP-Data."
],
[
"The input sequence is a natural language specification of an arithmetic word problem. The MWP questions and equations have been encoded using the subword text encoder provided by the TensorFlow Datasets library. The output is an expression in prefix, infix, or postfix notation, which then can be manipulated further and solved to obtain a final answer.",
"All examples in the datasets contain numbers, some of which are unique or rare in the corpus. Rare terms are adverse for generalization since the network is unlikely to form good representations for them. As a remedy to this issue, our networks do not consider any relevant numbers during training. Before the networks attempt any translation, we pre-process each question and expression by a number mapping algorithm. This algorithm replaces each numeric value with a corresponding identifier (e.g., $\\langle n1 \\rangle $, $\\langle n2 \\rangle $, etc.), and remembers the necessary mapping. We expect that this approach may significantly improve how networks interpret each question. When translating, the numbers in the original question are tagged and cached. From the encoded English and tags, a predicted sequence resembling an expression presents itself as output. Since each network's learned output resembles an arithmetic expression (e.g., $\\langle n1 \\rangle + \\langle n2 \\rangle * \\langle n3 \\rangle $), we use the cached tag mapping to replace the tags with the corresponding numbers and return a final mathematical expression.",
"Three representation models are trained and tested separately: Prefix-Transformer, Postfix-Transformer, and Infix-Transformer. For each experiment, we use representation-specific Transformer architectures. Each model uses the Adam optimizer with $beta_1=0.95$ and $beta_2=0.99$ with a standard epsilon of $1 \\times e^{-9}$. The learning rate is reduced automatically in each training session as the loss decreases. Throughout the training, each model respects a 10% dropout rate. We employ a batch size of 128 for all training. Each model is trained on MWP data for 300 iterations before testing. The networks are trained on a machine using 1 Nvidia 1080 Ti graphics processing unit (GPU).",
"We compare medium-sized, small, and minimal networks to show if network size can be reduced to increase training and testing efficiency while retaining high accuracy. Networks over six layers have shown to be non-effective for this task. We tried many configurations of our network models, but report results with only three configurations of Transformers.",
"Transformer Type 1: This network is a small to medium-sized network consisting of 4 Transformer layers. Each layer utilizes 8 attention heads with a depth of 512 and a feed-forward depth of 1024.",
"Transformer Type 2: The second model is small in size, using 2 Transformer layers. The layers utilize 8 attention heads with a depth of 256 and a feed-forward depth of 1024.",
"Transformer Type 3: The third type of model is minimal, using only 1 Transformer layer. This network utilizes 8 attention heads with a depth of 256 and a feed-forward depth of 512."
],
[
"We calculate the loss in training according to a mean of the sparse categorical cross-entropy formula. Sparse categorical cross-entropy BIBREF23 is used for identifying classes from a feature set, which assumes a large target classification set. Evaluation between the possible translation classes (all vocabulary subword tokens) and the produced class (predicted token) is the metric of performance here. During each evaluation, target terms are masked, predicted, and then compared to the masked (known) value. We adjust the model's loss according to the mean of the translation accuracy after predicting every determined subword in a translation.",
"where $K = |Translation \\; Classes|$, $J = |Translation|$, and $I$ is the number of examples."
],
[
"Some of the problems encountered by prior approaches seem to be attributable to the use of infix notation. In this experiment, we compare translation BLEU-2 scores to spot the differences in representation interpretability. Traditionally, a BLEU score is a metric of translation quality BIBREF24. Our presented BLEU scores represent an average of scores a given model received over each of the target test sets. We use a standard bi-gram weight to show how accurate translations are within a window of two adjacent terms. After testing translations, we calculate an average BLEU-2 score per test set, which is related to the success over that data. An average of the scores for each dataset become the presented value.",
"where $N$ is the number of test datasets, which is 4."
],
[
"This experiment compares our networks to recent previous work. We count a given test score by a simple “correct versus incorrect\" method. The answer to an expression directly ties to all of the translation terms being correct, which is why we do not consider partial precision. We compare average accuracies over 3 test trials on different randomly sampled test sets from each MWP dataset. This calculation more accurately depicts the generalization of our networks."
],
[
"We also explore the effect of language pre-training, as discussed earlier. This training occurs over 30 iterations, at the start of the two experiments, to introduce a good level of language understanding before training on the MWP data. The same Transformer architectures are also trained solely on the MWP data. We calculate the reported results as:",
"where $R$ is the number of test repetitions, which is 3; $N$ is the number of test datasets, which is 4; $P$ is the number of MWPs, and $C$ is the number of correct equation translations."
],
[
"We now present the results of our various experiments. We compare the three representations of target equations and three architectures of the Transformer model in each test.",
"Results of Experiment 1 are given in Table TABREF21. For clarity, the number in parentheses in front of a row is the Transformer type. By using BLEU scores, we assess the translation capability of each network. This test displays how networks transform different math representations to a character summary level.",
"We compare by average BLEU-2 accuracy among our tests in the Average column of Table TABREF21 to communicate these translation differences. To make it easier to understand the results, Table TABREF22 provides a summary of Table TABREF21.",
"Looking at Tables TABREF21 and TABREF22, we note that both the prefix and postfix representations of our target language perform better than the generally used infix notation. The non-pre-trained models perform slightly better than the pre-trained models, and the small or Type 2 models perform slightly better than the minimal-sized and medium-sized Transformer models. The non-pre-trained type 2 prefix Transformer arrangement produced the most consistent translations.",
"Table TABREF23 provides detailed results of Experiment 2. The numbers are absolute accuracies, i.e., they correspond to cases where the arithmetic expression generated is 100% correct, leading to the correct numeric answer. Results by BIBREF1, BIBREF2, BIBREF4, BIBREF5 are sparse but indicate the scale of success compared to recent past approaches. Prefix, postfix, and infix representations in Table TABREF23 show that network capabilities are changed by how teachable the target data is. The values in the last column of Table TABREF23 are summarized in Table TABREF24. How the models compare with respect to accuracy closely resembles the comparison of BLEU scores, presented earlier. Thus, BLEU scores seem to correlate well with accuracy values in our case.",
"While our networks fell short of BIBREF1 AI2 testing accuracy, we present state-of-the-art results for the remaining three datasets. The AI2 dataset is tricky because it has numeric values in the word descriptions that are extraneous or irrelevant to the actual computation, whereas the other datasets have only relevant numeric values. The type 2 postfix Transformer received the highest testing average of 87.2%.",
"Our attempt at language pre-training fell short of our expectations in all but one tested dataset. We had hoped that more stable language understanding would improve results in general. As previously mentioned, using more general and comprehensive corpora of language could help grow semantic ability."
],
[
"All of the network configurations used were very successful for our task. The prefix representation overall provides the most stable network performance. To display the capability of our most successful model (type 2 postfix Transformer), we present some outputs of the network in Figure FIGREF26.",
"The models respect the syntax of math expressions, even when incorrect. For the majority of questions, our translators were able to determine operators based solely on the context of language.",
"Our pre-training was unsuccessful in improving accuracy, even when applied to networks larger than those reported. We may need to use more inclusive language, or pre-train on very math specific texts to be successful. Our results support our thesis of infix limitation."
],
[
"Our system, while performing above standard, could still benefit from some improvements. One issue originates from the algorithmic pre-processing of our questions and expressions. In Figure FIGREF27 we show an example of one such issue. The excerpt comes from a type 3 non-pre-trained Transformer test. The example shows an overlooked identifier, $\\langle n1 \\rangle $. The issue is attributed to the identifier algorithm only considering numbers in the problem. Observe in the question that the word “eight\" is the number we expect to relate to $\\langle n2 \\rangle $. Our identifying algorithm could be improved by considering such number words and performing conversion to a numerical value. If our algorithm performed as expected, the identifier $\\langle n1 \\rangle $ relates with 4 (the first occurring number in the question) and $\\langle n2 \\rangle $ with 8 (the converted number word appearing second in the question). The overall translation was incorrect whether or not our algorithm was successful, but it is essential to analyze problems like these that may result in future improvements. Had all questions been tagged correctly, our performance would have likely improved."
],
[
"In this paper, we have shown that the use of Transformer networks improves automatic math word problem-solving. We have also shown that the use of postfix target expressions performs better than the other two expression formats. Our improvements are well-motivated but straightforward and easy to use, demonstrating that the well-acclaimed Transformer architecture for language processing can handle MWPs well, obviating the need to build specialized neural architectures for this task.",
"Extensive pre-training over much larger corpora of language has extended the capabilities of many neural approaches. For example, networks like BERT BIBREF18, trained extensively on data from Wikipedia, perform relatively better in many tasks. Pre-training on a much larger corpus remains an extension we would like to try.",
"We want to work with more complex MWP datasets. Our datasets contain basic arithmetic expressions of +, -, * and /, and only up to 3 of them. For example, datasets such as Dolphin18k BIBREF25, consisting of web-answered questions from Yahoo! Answers, require a wider variety of arithmetic operators to be understood by the system.",
"We have noticed that the presence of irrelevant numbers in the sentences for MWPs limits our performance. We can think of such numbers as a sort of adversarial threat to an MWP solver that stress-test it. It may be interesting to explore how to keep a network's performance high, even in such cases.",
"With a hope to further advance this area of research and heighten interests, all of the code and data used is available on GitHub."
],
[
"The National Science Foundation supports the work reported in this paper under Grant No. 1659788. Any opinions, findings any conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of the National Science Foundation."
]
]
} | {
"question": [
"Does pre-training on general text corpus improve performance?",
"What neural configurations are explored?",
"Are the Transformers masked?",
"How is this problem evaluated?",
"What datasets do they use?"
],
"question_id": [
"3f6610d1d68c62eddc2150c460bf1b48a064e5e6",
"4c854d33a832f3f729ce73b206ff90677e131e48",
"163c15da1aa0ba370a00c5a09294cd2ccdb4b96d",
"90dd5c0f5084a045fd6346469bc853c33622908f",
"095888f6e10080a958d9cd3f779a339498f3a109"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
""
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"Our attempt at language pre-training fell short of our expectations in all but one tested dataset. We had hoped that more stable language understanding would improve results in general. As previously mentioned, using more general and comprehensive corpora of language could help grow semantic ability.",
"Our pre-training was unsuccessful in improving accuracy, even when applied to networks larger than those reported. We may need to use more inclusive language, or pre-train on very math specific texts to be successful. Our results support our thesis of infix limitation."
],
"highlighted_evidence": [
"Our attempt at language pre-training fell short of our expectations in all but one tested dataset.",
"Our pre-training was unsuccessful in improving accuracy, even when applied to networks larger than those reported."
]
}
],
"annotation_id": [
"2fc208edc557c6b89e47e4444a324315476299cf"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"tried many configurations of our network models, but report results with only three configurations",
"Transformer Type 1",
"Transformer Type 2",
"Transformer Type 3"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We compare medium-sized, small, and minimal networks to show if network size can be reduced to increase training and testing efficiency while retaining high accuracy. Networks over six layers have shown to be non-effective for this task. We tried many configurations of our network models, but report results with only three configurations of Transformers.",
"Transformer Type 1: This network is a small to medium-sized network consisting of 4 Transformer layers. Each layer utilizes 8 attention heads with a depth of 512 and a feed-forward depth of 1024.",
"Transformer Type 2: The second model is small in size, using 2 Transformer layers. The layers utilize 8 attention heads with a depth of 256 and a feed-forward depth of 1024.",
"Transformer Type 3: The third type of model is minimal, using only 1 Transformer layer. This network utilizes 8 attention heads with a depth of 256 and a feed-forward depth of 512."
],
"highlighted_evidence": [
"We tried many configurations of our network models, but report results with only three configurations of Transformers.\n\nTransformer Type 1: This network is a small to medium-sized network consisting of 4 Transformer layers. Each layer utilizes 8 attention heads with a depth of 512 and a feed-forward depth of 1024.\n\nTransformer Type 2: The second model is small in size, using 2 Transformer layers. The layers utilize 8 attention heads with a depth of 256 and a feed-forward depth of 1024.\n\nTransformer Type 3: The third type of model is minimal, using only 1 Transformer layer. This network utilizes 8 attention heads with a depth of 256 and a feed-forward depth of 512."
]
}
],
"annotation_id": [
"d5e7e66432c587d5761a4da08aa684c24ba68f22"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We calculate the loss in training according to a mean of the sparse categorical cross-entropy formula. Sparse categorical cross-entropy BIBREF23 is used for identifying classes from a feature set, which assumes a large target classification set. Evaluation between the possible translation classes (all vocabulary subword tokens) and the produced class (predicted token) is the metric of performance here. During each evaluation, target terms are masked, predicted, and then compared to the masked (known) value. We adjust the model's loss according to the mean of the translation accuracy after predicting every determined subword in a translation."
],
"highlighted_evidence": [
"During each evaluation, target terms are masked, predicted, and then compared to the masked (known) value."
]
}
],
"annotation_id": [
"2dd2a1a90031db8cca22a096b3bc11fa39b5b03b"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BLEU-2",
"average accuracies over 3 test trials on different randomly sampled test sets"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Approach ::: Method: Training and Testing ::: Experiment 1: Representation",
"Some of the problems encountered by prior approaches seem to be attributable to the use of infix notation. In this experiment, we compare translation BLEU-2 scores to spot the differences in representation interpretability. Traditionally, a BLEU score is a metric of translation quality BIBREF24. Our presented BLEU scores represent an average of scores a given model received over each of the target test sets. We use a standard bi-gram weight to show how accurate translations are within a window of two adjacent terms. After testing translations, we calculate an average BLEU-2 score per test set, which is related to the success over that data. An average of the scores for each dataset become the presented value.",
"Approach ::: Method: Training and Testing ::: Experiment 2: State-of-the-art",
"This experiment compares our networks to recent previous work. We count a given test score by a simple “correct versus incorrect\" method. The answer to an expression directly ties to all of the translation terms being correct, which is why we do not consider partial precision. We compare average accuracies over 3 test trials on different randomly sampled test sets from each MWP dataset. This calculation more accurately depicts the generalization of our networks."
],
"highlighted_evidence": [
"Approach ::: Method: Training and Testing ::: Experiment 1: Representation\nSome of the problems encountered by prior approaches seem to be attributable to the use of infix notation. In this experiment, we compare translation BLEU-2 scores to spot the differences in representation interpretability.",
"Approach ::: Method: Training and Testing ::: Experiment 2: State-of-the-art\nThis experiment compares our networks to recent previous work. We count a given test score by a simple “correct versus incorrect\" method. The answer to an expression directly ties to all of the translation terms being correct, which is why we do not consider partial precision. We compare average accuracies over 3 test trials on different randomly sampled test sets from each MWP dataset."
]
}
],
"annotation_id": [
"01a5c12023afb546469882b0ddba8a8d79c23c72"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"AI2 BIBREF2",
"CC BIBREF19",
"IL BIBREF4",
"MAWPS BIBREF20"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We work with four individual datasets. The datasets contain addition, subtraction, multiplication, and division word problems.",
"AI2 BIBREF2. AI2 is a collection of 395 addition and subtraction problems, containing numeric values, where some may not be relevant to the question.",
"CC BIBREF19. The Common Core dataset contains 600 2-step questions. The Cognitive Computation Group at the University of Pennsylvania gathered these questions.",
"IL BIBREF4. The Illinois dataset contains 562 1-step algebra word questions. The Cognitive Computation Group compiled these questions also.",
"MAWPS BIBREF20. MAWPS is a relatively large collection, primarily from other MWP datasets. We use 2,373 of 3,915 MWPs from this set. The problems not used were more complex problems that generate systems of equations. We exclude such problems because generating systems of equations is not our focus."
],
"highlighted_evidence": [
"We work with four individual datasets. The datasets contain addition, subtraction, multiplication, and division word problems.\n\nAI2 BIBREF2. AI2 is a collection of 395 addition and subtraction problems, containing numeric values, where some may not be relevant to the question.\n\nCC BIBREF19. The Common Core dataset contains 600 2-step questions. The Cognitive Computation Group at the University of Pennsylvania gathered these questions.\n\nIL BIBREF4. The Illinois dataset contains 562 1-step algebra word questions. The Cognitive Computation Group compiled these questions also.\n\nMAWPS BIBREF20. MAWPS is a relatively large collection, primarily from other MWP datasets. We use 2,373 of 3,915 MWPs from this set."
]
}
],
"annotation_id": [
"ce8861ebd330c73a2d5f27050a9cc9eb55af2892"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"TABLE I BLEU-2 COMPARISON FOR EXPERIMENT 1.",
"TABLE II SUMMARY OF BLEU SCORES FROM TABLE I.",
"TABLE III TEST RESULTS FOR EXPERIMENT 2 (* DENOTES AVERAGES ON PRESENT VALUES ONLY).",
"TABLE IV SUMMARY OF ACCURACIES FROM TABLE III."
],
"file": [
"4-TableI-1.png",
"4-TableII-1.png",
"5-TableIII-1.png",
"5-TableIV-1.png"
]
} |
1912.03234 | What Do You Mean I'm Funny? Personalizing the Joke Skill of a Voice-Controlled Virtual Assistant | A considerable part of the success experienced by Voice-controlled virtual assistants (VVA) is due to the emotional and personalized experience they deliver, with humor being a key component in providing an engaging interaction. In this paper we describe methods used to improve the joke skill of a VVA through personalization. The first method, based on traditional NLP techniques, is robust and scalable. The others combine self-attentional network and multi-task learning to obtain better results, at the cost of added complexity. A significant challenge facing these systems is the lack of explicit user feedback needed to provide labels for the models. Instead, we explore the use of two implicit feedback-based labelling strategies. All models were evaluated on real production data. Online results show that models trained on any of the considered labels outperform a heuristic method, presenting a positive real-world impact on user satisfaction. Offline results suggest that the deep-learning approaches can improve the joke experience with respect to the other considered methods. | {
"section_name": [
"Introduction",
"Method ::: Labelling Strategies",
"Method ::: Features",
"Method ::: NLP-based: LR-Model",
"Method ::: Deep-Learning-based: DL-Models",
"Validation",
"Validation ::: Online Results: A/B Testing",
"Validation ::: Offline Results",
"Conclusions and Future Work"
],
"paragraphs": [
[
"Voice-controlled virtual assistants (VVA) such as Siri and Alexa have experienced an exponential growth in terms of number of users and provided capabilities. They are used by millions for a variety of tasks including shopping, playing music, and even telling jokes. Arguably, their success is due in part to the emotional and personalized experience they provide. One important aspect of this emotional interaction is humor, a fundamental element of communication. Not only can it create in the user a sense of personality, but also be used as fallback technique for out-of-domain queries BIBREF0. Usually, a VVA's humorous responses are invoked by users with the phrase \"Tell me a joke\". In order to improve the joke experience and overall user satisfaction with a VVA, we propose to personalize the response to each request. To achieve this, a method should be able to recognize and evaluate humor, a challenging task that has been the focus of extensive work. Some authors have applied traditional NLP techniques BIBREF1, while others deep learning models BIBREF2. Moreover, BIBREF3 follows a semantic-based approach, while BIBREF4 and BIBREF5 tackle the challenge from a cognitive and linguistic perspective respectively.",
"To this end, we have developed two methods. The first one is based on traditional NLP techniques. Although relatively simple, it is robust, scalable, and has low latency, a fundamental property for real-time VVA systems. The other approaches combine multi-task learning BIBREF6 and self-attentional networks BIBREF7 to obtain better results, at the cost of added complexity. Both BERT BIBREF8 and an adapted transformer BIBREF7 architecture are considered. This choice of architecture was motivated by the advantages it presents over traditional RNN and CNN models, including better performance BIBREF9, faster training/inference (important for real-time systems), and better sense disambiguation BIBREF10 (an important component of computational humor BIBREF3).",
"The proposed models use binary classifiers to perform point-wise ranking, and therefore require a labelled dataset. To generate it, we explore two implicit user-feedback labelling strategies: five-minute reuse and one-day return. Online A/B testing is used to determine if these labelling strategies are suited to optimize the desired user-satisfaction metrics, and offline data to evaluated and compared the system's performance."
],
[
"Generating labels for this VVA skill is challenging. Label generation through explicit user feedback is unavailable since asking users for feedback creates friction and degrade the user experience. In addition, available humor datasets such as BIBREF3, BIBREF11 only contain jokes and corresponding labels, but not the additional features we need to personalize the jokes.",
"To overcome this difficulty, it is common to resort to implicit feedback. In particular, many VVA applications use interruptions as negative labels, the rationale being that unhappy users will stop the VVA. This strategy, however, is not suitable for our use-case since responses are short and users need to hear the entire joke to decide if it is funny. Instead, we explore two other implicit feedback labelling strategies: five-minute reuse and 1-day return. Five-minute reuse labels an instance positive if it was followed by a new joke request within five-minutes. Conversely, 1-day return marks as positive all joke requests that were followed by a new one within the following 1 to 25-hour interval. Both strategies assume that if a user returns, he is happy with the jokes. This is clearly an approximation, since a returning user might be overall satisfied with the experience, but not with all the jokes. The same is true for the implied negatives; the user might have been satisfied with some or all of the jokes. Therefore, these labels are noisy and only provide weak supervision to the models.",
"Table TABREF2 shows an example of the labels' values for a set of joke requests from one user."
],
[
"All models have access to the same raw features, which we conceptually separate into user, item and contextual features. Examples of features in each of these categories are shown in Table TABREF4. Some of these are used directly by the models, while others need to be pre-processed. The manner in which each model consumes them is explained next."
],
[
"To favor simplicity over accuracy, a logistic regression (LR) model is first proposed. Significant effort was put into finding expressive features. Categorical features are one-hot encoded and numerical ones are normalized. The raw Joke Text and Timestamp features require special treatment. The Joke Text is tokenized and the stop-words are removed. We can then compute computational humor features on the clean text such as sense combination BIBREF3 and ambiguity BIBREF12. In addition, since many jokes in our corpus are related to specific events (Christmas, etc), we check for keywords that relate the jokes to them. For example, if \"Santa\" is included, we infer it is a Christmas joke. Finally, pre-computed word embeddings with sub-word information are used to represent jokes by taking the average and maximum vectors over the token representations. Sub-word information is important when encoding jokes since many can contain out-of-vocabulary tokens. The joke's vector representations are also used to compute a summarized view of the user's past liked and disliked jokes. We consider that a user liked a joke when the assigned label is 1, an approximation given the noisy nature of the labels. The user's liked/disliked joke vectors are also combined with the candidate joke vector by taking the cosine similarity between them.",
"For the raw Timestamp feature, we first extract simple time/date features such as month, day and isWeekend. We then compute binary features that mark if the timestamp occurred near one of the special events mentioned before. Some of these events occur the same day every year, while others change (for example, the Super Bowl). In addition, many events are country dependent. The timestamp's event features are combined with the joke's event features to allow the model to capture if an event-related joke occurs at the right time of the year.",
"The LR classifier is trained on the processed features and one of the labels. The model's posterior probability is used to sort the candidates, which are chosen randomly from a pool of unheard jokes. Although useful (see Validation section), this model has several shortcomings. In particular, many of the used features require significant feature engineering and/or are country/language dependent, limiting the extensibility of the model."
],
[
"To overcome the LR-model's limitations, we propose the following model (see Figure FIGREF7). In the input layer, features are separated into context, item and user features. Unlike the LR-model, time and text features do not require extensive feature engineering. Instead, simple features (day, month and year) are extracted from the timestamp. After tokenization and stop-word removal, text features are passed through a pre-trained word embeding layer, and later, input into the joke encoder block.",
"The basis of the joke encoder is a modified transformer. Firstly, only the encoder is needed. Moreover, since studies suggest that humor is subjective and conditioned on the user's context BIBREF13, we add an additional sub-layer in the transformer encoder that performs attention over the user's features. This sub-layer, inserted between the two typical transformer sub-layers at certain depths of the network, allows the encoder to adapt the representations of the jokes to different user contexts. Thus, the same joke can be encoded differently depending on the user's features. In practice, this additional sub-layer works like the normal self-attention sub-layer, except it creates its query matrix Q from the sub-layer below, and its K and V matrices from the user features. As an alternative, we also test encoding the jokes using a pre-trained BERT model.",
"Regardless of the used encoder, we average the token representations to obtain a global encoding of the jokes. The same encoder is used to represent the item's (the joke to rank) and the user's (liked and disliked jokes) textual features through weight sharing, and the cosine similarity between both representations are computed. The processed features are then concatenated and passed through a final block of fully connected layers that contains the output layers. Since experiments determined (see Validation section) that both labeling strategies can improve the desired business metrics, instead of optimizing for only one of them, we take a multi-task learning approach. Thus, we have two softmax outputs.",
"Finally, we use a loss function that considers label uncertainty, class imbalance and the different labeling functions. We start from the traditional cross-entropy loss for one labelling function. We then apply uniform label smoothing BIBREF14, which converts the one-hot-encoded label vectors into smoothed label vectors towards $0.5$:",
"with $\\epsilon $ a hyper-parameter. Label smoothing provides a way of considering the uncertainty on the labels by encouraging the model to be less confident. We have also experimented with other alternatives, including specialized losses such as BIBREF15. However, they did not produce a significant increase in performance in our tests. To further model the possible uncertainty in the feedback, we apply sample weights calculated using an exponential decay function on the time difference between the current and the following training instance of the same customer:",
"where $w_i$ is the weight of sample $i$, $t_i$ is the time difference between instances $i$ and $i+1$ for the same user, and $a,b$ are hyper-parameters such that $a>0$ and $0<b<1$. The rationale behind these weights is the following. If for example, we consider labeling function 1, and a user asks for consecutive jokes, first within 10 seconds and later within 4.9 minutes, both instances are labeled as positive. However, we hypothesize that there is a lower chance that in the second case the user requested an additional joke because he liked the first one. In addition, class weights are applied to each sample to account for the natural class imbalance of the dataset. Finally, the total loss to be optimized is the weighted sum of the losses for each of the considered labeling functions:",
"where $w_{l}$ are manually set weights for each label and $\\mathcal {L}_{l}$ are the losses corresponding to each label, which include all the weights mentioned before."
],
[
"A two-step validation was conducted for English-speaking customers. An initial A/B testing for the LR model in a production setting was performed to compare the labelling strategies. A second offline comparison of the models was conducted on historical data and a selected labelling strategy. One month of data and a subset of the customers was used (approx. eighty thousand). The sampled dataset presents a fraction of positive labels of approximately 0.5 for reuse and 0.2 for one-day return. Importantly, since this evaluation is done on a subset of users, the dataset characteristic's do not necessarily represent real production traffic. The joke corpus in this dataset contains thousands of unique jokes of different categories (sci-fi, sports, etc) and types (puns, limerick, etc). The dataset was split timewise into training/validation/test sets, and hyperparameters were optimized to maximize the AUC-ROC on the validation set. As a benchmark, we also consider two additional methods: a non-personalized popularity model and one that follows BIBREF16, replacing the transformer joke encoder with a CNN network (the specialized loss and other characteristics of the DL model are kept).",
"Hyperparameters were optimized using grid-search for the LR-Model. Due to computational constraints, random search was instead used for the DL-Model. In both cases, hyperparameters are selected to optimize the AUC-ROC on the validation set. Table TABREF11 lists some of the considered hyperparameter values and ranges for both models. The actual optimal values are sample specific."
],
[
"Two treatment groups are considered, one per label. Users in the control group are presented jokes at random, without repetition. Several user-satisfaction metrics such as user interruption rate, reuse of this and other VVA skills, and number of active dialogs are monitored during the tests. The relative improvement/decline of these metrics is compared between the treatments and control, and between the treatments themselves. The statistical significance is measured when determining differences between the groups. Results show that the LR-based model consistently outperforms the heuristic method for both labeling strategies, significantly improving retention, dialogs and interruptions. These results suggest that models trained using either label can improve the VVA's joke experience."
],
[
"One-day return was selected for the offline evaluation because models trained on it have a better AUC-ROC, and both labeling strategies were successful in the online validation. All results are expressed as relative change with respect to the popularity model.",
"We start by evaluating the models using AUC-ROC. As seen in Table TABREF14, the transformer-based models, and in particular our custom architecture, outperform all other approaches. Similar conclusions can be reached regarding overall accuracy. However, given the class imbalance, accuracy is not necessarily the best metric to consider. In addition, to better understand the effect to the original transformer architecture, we present the performance of the model with and without the modified loss and special attention sub-layer (see Table TABREF14). Results suggest both modifications have a positive impact on the performance. Finally, to further evaluate the ranking capabilities of the proposed methods, we use top-1 accuracy. Additional positions in the ranking are not considered because only the top ranked joke is presented to the customer. Results show that the DL based models outperform the other systems, with a relative change in top-1 accuracy of 1.4 for DL-BERT and 0.43 for DL-T, compared with 0.14 for the LR method.",
"Results show that the proposed methods provide different compromises between accuracy, scalability and robustness. On one hand, the relatively good performance of the LR model with engineered features provides a strong baseline both in terms of accuracy and training/inference performance, at the cost of being difficult to extend to new countries and languages. On the other hand, DL based methods give a significant accuracy gain and require no feature engineering, which facilitates the expansion of the joke experience to new markets and languages. This comes at a cost of added complexity if deployed in production. In addition, given the size of the BERT model (340M parameters), real-time inference using DL-BERT becomes problematic due to latency constraints. In this regard, the DL-T model could be a good compromise since its complexity can be adapted, and it provides good overall accuracy."
],
[
"This paper describes systems to personalize a VVA's joke experience using NLP and deep-learning techniques that provide different compromises between accuracy, scalability and robustness. Implicit feedback signals are used to generate weak labels and provide supervision to the ranking models. Results on production data show that models trained on any of the considered labels present a positive real-world impact on user satisfaction, and that the deep learning approaches can potentially improve the joke skill with respect to the other considered methods. In the future, we would like to compare all methods in A/B testing, and to extend the models to other languages."
]
]
} | {
"question": [
"What evaluation metrics were used?",
"Where did the real production data come from?",
"What feedback labels are used?"
],
"question_id": [
"57e783f00f594e08e43a31939aedb235c9d5a102",
"9646fa1abbe3102a0364f84e0a55d107d45c97f0",
"29983f4bc8a5513a198755e474361deee93d4ab6"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"research",
"research",
"research"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"humor",
"humor",
"humor"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"AUC-ROC"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"A two-step validation was conducted for English-speaking customers. An initial A/B testing for the LR model in a production setting was performed to compare the labelling strategies. A second offline comparison of the models was conducted on historical data and a selected labelling strategy. One month of data and a subset of the customers was used (approx. eighty thousand). The sampled dataset presents a fraction of positive labels of approximately 0.5 for reuse and 0.2 for one-day return. Importantly, since this evaluation is done on a subset of users, the dataset characteristic's do not necessarily represent real production traffic. The joke corpus in this dataset contains thousands of unique jokes of different categories (sci-fi, sports, etc) and types (puns, limerick, etc). The dataset was split timewise into training/validation/test sets, and hyperparameters were optimized to maximize the AUC-ROC on the validation set. As a benchmark, we also consider two additional methods: a non-personalized popularity model and one that follows BIBREF16, replacing the transformer joke encoder with a CNN network (the specialized loss and other characteristics of the DL model are kept)."
],
"highlighted_evidence": [
" The dataset was split timewise into training/validation/test sets, and hyperparameters were optimized to maximize the AUC-ROC on the validation set. "
]
}
],
"annotation_id": [
"12a2524828525ebc1d9f9aa17b135fd120615c24"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" jokes of different categories (sci-fi, sports, etc) and types (puns, limerick, etc)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"A two-step validation was conducted for English-speaking customers. An initial A/B testing for the LR model in a production setting was performed to compare the labelling strategies. A second offline comparison of the models was conducted on historical data and a selected labelling strategy. One month of data and a subset of the customers was used (approx. eighty thousand). The sampled dataset presents a fraction of positive labels of approximately 0.5 for reuse and 0.2 for one-day return. Importantly, since this evaluation is done on a subset of users, the dataset characteristic's do not necessarily represent real production traffic. The joke corpus in this dataset contains thousands of unique jokes of different categories (sci-fi, sports, etc) and types (puns, limerick, etc). The dataset was split timewise into training/validation/test sets, and hyperparameters were optimized to maximize the AUC-ROC on the validation set. As a benchmark, we also consider two additional methods: a non-personalized popularity model and one that follows BIBREF16, replacing the transformer joke encoder with a CNN network (the specialized loss and other characteristics of the DL model are kept)."
],
"highlighted_evidence": [
"The joke corpus in this dataset contains thousands of unique jokes of different categories (sci-fi, sports, etc) and types (puns, limerick, etc)."
]
}
],
"annotation_id": [
"01b29b400e870a62dbb3e21f71d0baf3617ba977"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"five-minute reuse and one-day return"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The proposed models use binary classifiers to perform point-wise ranking, and therefore require a labelled dataset. To generate it, we explore two implicit user-feedback labelling strategies: five-minute reuse and one-day return. Online A/B testing is used to determine if these labelling strategies are suited to optimize the desired user-satisfaction metrics, and offline data to evaluated and compared the system's performance."
],
"highlighted_evidence": [
"To generate it, we explore two implicit user-feedback labelling strategies: five-minute reuse and one-day return. "
]
}
],
"annotation_id": [
"7baeab31a2d74fe23ada80a5dd6c04e8d1f32345"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Table 1: Example of labelling strategies: five-minute reuse (label 1) and 1-day return (label 2)",
"Table 2: Examples of features within each category",
"Figure 1: Architecture of the transformer-based model",
"Table 3: Hyperparameter values tuned over, LR (top) and DL models (bottom)",
"Table 4: Relative change w.r.t popularity model of AUCROC and Overall Accuracy: transformer model (DL-T), BERT model (DL-BERT), transformer without special context-aware attention (DL-T-noAtt) and without both special attention and modified loss (DL-T-basic), CNN model (DL-CNN) and LR model (LR)."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure1-1.png",
"4-Table3-1.png",
"4-Table4-1.png"
]
} |
1911.11750 | A Measure of Similarity in Textual Data Using Spearman's Rank Correlation Coefficient | In the last decade, many diverse advances have occurred in the field of information extraction from data. Information extraction in its simplest form takes place in computing environments, where structured data can be extracted through a series of queries. The continuous expansion of quantities of data have therefore provided an opportunity for knowledge extraction (KE) from a textual document (TD). A typical problem of this kind is the extraction of common characteristics and knowledge from a group of TDs, with the possibility to group such similar TDs in a process known as clustering. In this paper we present a technique for such KE among a group of TDs related to the common characteristics and meaning of their content. Our technique is based on the Spearman's Rank Correlation Coefficient (SRCC), for which the conducted experiments have proven to be comprehensive measure to achieve a high-quality KE. | {
"section_name": [
"Introduction",
"Background",
"Background ::: Document Representation",
"Background ::: Measures of Similarity",
"Related Work",
"The Spearman's Rank Correlation Coefficient Similarity Measure",
"The Spearman's Rank Correlation Coefficient Similarity Measure ::: Spearman's Rank Correlation Coefficient",
"The Spearman's Rank Correlation Coefficient Similarity Measure ::: Spearman's Rank Correlation Coefficient ::: An Illustration of the Ranking TF-IDF Vectors",
"Experiments",
"Experiments ::: Comparison Between Similarity Measures",
"Experiments ::: Non-linearity of Documents",
"Conclusion and Future Work"
],
"paragraphs": [
[
"Over the past few years, the term big data has become an important key point for research into data mining and information retrieval. Through the years, the quantity of data managed across enterprises has evolved from a simple and imperceptible task to an extent to which it has become the central performance improvement problem. In other words, it evolved to be the next frontier for innovation, competition and productivity BIBREF0. Extracting knowledge from data is now a very competitive environment. Many companies process vast amounts of customer/user data in order to improve the quality of experience (QoE) of their customers. For instance, a typical use-case scenario would be a book seller that performs an automatic extraction of the content of the books a customer has bought, and subsequently extracts knowledge of what customers prefer to read. The knowledge extracted could then be used to recommend other books. Book recommending systems are typical examples where data mining techniques should be considered as the primary tool for making future decisions BIBREF1.",
"KE from TDs is an essential field of research in data mining and it certainly requires techniques that are reliable and accurate in order to neutralize (or even eliminate) uncertainty in future decisions. Grouping TDs based on their content and mutual key information is referred to as clustering. Clustering is mostly performed with respect to a measure of similarity between TDs, which must be represented as vectors in a vector space beforehand BIBREF2. News aggregation engines can be considered as a typical representative where such techniques are extensively applied as a sub-field of natural language processing (NLP).",
"In this paper we present a new technique for measuring similarity between TDs, represented in a vector space, based on SRCC - \"a statistical measure of association between two things\" BIBREF3, which in this case things refer to TDs. The mathematical properties of SRCC (such as the ability to detect nonlinear correlation) make it compelling to be researched into. Our motivation is to provide a new technique of improving the quality of KE based on the well-known association measure SRCC, as opposed to other well-known TD similarity measures.",
"The paper is organized as follows: Section SECREF2 gives a brief overview of the vector space representation of a TD and the corresponding similarity measures, in Section SECREF3 we address conducted research of the role of SRCC in data mining and trend prediction. Section SECREF4 is a detailed description of the proposed technique, and later, in Section SECREF5 we present clustering and classification experiments conducted on several sets of TDs, while Section SECREF6 summarizes our research and contribution to the broad area of statistical text analysis."
],
[
"In this section we provide a brief background of vector space representation of TDs and existing similarity measures that have been widely used in statistical text analysis. To begin with, we consider the representation of documents."
],
[
"A document $d$ can be defined as a finite sequence of terms (independent textual entities within a document, for example, words), namely $d=(t_1,t_2,\\dots ,t_n)$. A general idea is to associate weight to each term $t_i$ within $d$, such that",
"which has proven superior in prior extensive research BIBREF4. The most common weight measure is Term Frequency - Inverse Document Frequency (TF-IDF). TF is the frequency of a term within a single document, and IDF represents the importance, or uniqueness of a term within a set of documents $D=\\lbrace d_1, d_2, \\dots ,d_m\\rbrace $. TF-IDF is defined as follows:",
"where",
"such that $f$ is the number of occurrences of $t$ in $d$ and $\\log $ is used to avoid very small values close to zero.",
"Having these measures defined, it becomes obvious that each $w_i$, for $i=1,\\dots ,n$ is assigned the TF-IDF value of the corresponding term. It turns out that each document is represented as a vector of TF-IDF weights within a vector space model (VSM) with its properties BIBREF5."
],
[
"Different ways of computing the similarity of two vector exist. There are two main approaches in similarity computation:",
"Deterministic - similarity measures exploiting algebraic properties of vectors and their geometrical interpretation. These include, for instance, cosine similarity (CS), Jaccard coefficients (for binary representations), etc.",
"Stochastic - similarity measures in which uncertainty is taken into account. These include, for instance, statistics such as Pearson's Correlation Coefficient (PCC) BIBREF6.",
"Let $\\mathbf {u}$ and $\\mathbf {v}$ be the vector representations of two documents $d_1$ and $d_2$. Cosine similarity simply measures $cos\\theta $, where $\\theta $ is the angle between $\\mathbf {u}$ and $\\mathbf {v}$",
"(cosine similarity)",
"(PCC)",
"where",
"All of the above measures are widely used and have proven efficient, but an important aspect is the lack of importance of the order of terms in textual data. It is easy for one to conclude that, two documents containing a single sentence each, but in a reverse order of terms, most deterministic methods fail to express that these are actually very similar. On the other hand, PCC detects only linear correlation, which constraints the diversity present in textual data. In the following section, we study relevant research in solving this problem, and then in Sections SECREF4 and SECREF5 we present our solution and results."
],
[
"A significant number of similarity measures have been proposed and this topic has been thoroughly elaborated. Its main application is considered to be clustering and classification of textual data organized in TDs. In this section, we provide an overview of relevant research on this topic, to which we can later compare our proposed technique for computing vector similarity.",
"KE (also referred to as knowledge discovery) techniques are used to extract information from unstructured data, which can be subsequently used for applying supervised or unsupervised learning techniques, such as clustering and classification of the content BIBREF7. Text clustering should address several challenges such as vast amounts of data, very high dimensionality of more than 10,000 terms (dimensions), and most importantly - an understandable description of the clusters BIBREF8, which essentially implies the demand for high quality of extracted information.",
"Regarding high quality KE and information accuracy, much effort has been put into improving similarity measurements. An improvement based on linear algebra, known as Singular Value Decomposition (SVD), is oriented towards word similarity, but instead, its main application is document similarity BIBREF9. Alluring is the fact that this measure takes the advantage of synonym recognition and has been used to achieve human-level scores on multiple-choice synonym questions from the Test of English as a Foreign Language (TOEFL) in a technique known as Latent Semantic Analysis (LSA) BIBREF10 BIBREF5.",
"Other semantic term similarity measures have been also proposed, based on information exclusively derived from large corpora of words, such as Pointwise Mutual Information (PMI), which has been reported to have achieved a large degree of correctness in the synonym questions in the TOEFL and SAT tests BIBREF11.",
"Moreover, normalized knowledge-based measures, such as Leacock & Chodrow BIBREF12, Lesk (\"how to tell a pine cone from an ice-cream cone\" BIBREF13, or measures for the depth of two concepts (preferably vebs) in the Word-Net taxonomy BIBREF14 have experimentally proven to be efficient. Their accuracy converges to approximately 69%, Leacock & Chodrow and Lesk have showed the highest precision, and having them combined turns out to be the approximately optimal solution BIBREF11."
],
[
"The main idea behind our proposed technique is to introduce uncertainty in the calculations of the similarity between TDs represented in a vector space model, based on the nonlinear properties of SRCC. Unlike PCC, which is only able to detect linear correlation, SRCC's nonlinear ability provides a convenient way of taking different ordering of terms into account."
],
[
"The Spreaman's Rank Correlation Coefficient BIBREF3, denoted $\\rho $, has a from which is very similar to PCC. Namely, for $n$ raw scores $U_i, V_i$ for $i=1,\\dots ,n$ denoting TF-IDF values for two document vectors $\\mathbf {U}, \\mathbf {V}$,",
"where $u_i$ and $v_i$ are the corresponding ranks of $U_i$ and $V_i$, for $i=0,\\dots ,n-1$. A metric to assign the ranks of each of the TF-IDF values has to be determined beforehand. Each $U_i$ is assigned a rank value $u_i$, such that $u_i=0,1,\\dots ,n-1$. It is important to note that the metric by which the TF-IDF values are ranked is essentially their sorting criteria. A convenient way of determining this criteria when dealing with TF-IDF values, which emphasize the importance of a term within a TD set, is to sort these values in an ascending order. Thus, the largest (or most important) TF-IDF value within a TD vector is assigned the rank value of $n-1$, and the least important is assigned a value of 0."
],
[
"Consider two TDs $d_1$ and $d_2$, each containing a single sentence.",
"Document 1: John had asked Mary to marry him before she left.",
"Document 2: Before she left, Mary was asked by John to be his wife.",
"Now consider these sentences lemmatized:",
"Document 1: John have ask Mary marry before leave.",
"Document 2: Before leave Mary ask John his wife.",
"Let us now represent $d_1$ and $d_2$ as TF-IDF vectors for the vocabulary in our small corpus.",
"The results in Table TABREF7 show that SRCC performs much better in knowledge extraction. The two documents' contents contain the same idea expressed by terms in a different order that John had asked Mary to marry him before she left. It is obvious that cosine similarity cannot recognize this association, but SRCC has successfully recognized it and produced a similarity value of -0.285714.",
"SRCC is essentially conducive to semantic similarity. Rising the importance of a term in a TD will eventually rise its importance in another TD. But if the two TDs are of different size, the terms' importance values will also differ, by which a nonlinear association will emerge. This association will not be recognized by PCC at all (as it only detects linear association), but SRCC will definitely catch this detail and produce the desirable similarity value. The idea is to use SRCC to catch such terms which drive the semantic context of a TD, which will follow a nonlinear and lie on a polynomial curve, and not on the line $x=y$.",
"In our approach, we use a non-standard measure of similarity in textual data with simple and common frequency values, such as TF-IDF, in contrast to the statement that simple frequencies are not enough for high-quality knowledge extraction BIBREF5. In the next section, we will present our experiments and discuss the results we have obtained."
],
[
"In order to test our proposed approach, we have conducted a series of experiments. In this section, we briefly discuss the outcome and provide a clear view of whether our approach is suitable for knowledge extraction from textual data in a semantic context.",
"We have used a dataset of 14 TDs to conduct our experiments. There are several subjects on which their content is based: (aliens, stories, law, news) BIBREF15."
],
[
"In this part, we have compared the similarity values produced by each of the similarity measures CS, SRCC and PCC. We have picked a few notable results and they are summarized in Table TABREF9 below.",
"In Table TABREF9 that SRCC mostly differs from CS and PCC, which also differ in some cases.For instance, $d_1$ refers to leadership in the nineties, while $d_5$ refers to the family and medical lead act of 1993. We have empirically observed that the general topics discussed in these two textual documents are very different. Namely, discusses different frameworks for leadership empowerment, while $d_5$ discusses medical treatment and self-care of employees. We have observed that the term employee is the only connection between $d_1$ and $d_5$. The similarity value of CS of 0.36 is very unreal in this case, while PCC (0.05), and especially SRCC (0.0018) provide a much more realistic view of the semantic knowledge aggregated in these documents. Another example are $d_8$ and $d_9$. The contents of these documents are very straightforward and very similar, because they discuss aliens seen by Boeing-747 pilots and $d_9$ discusses angels that were considered to be aliens. It is obvious that SRCC is able to detect this association as good as CS and PCC which are very good in such straightforward cases.",
"We have observed that SRCC does not perform worse than any other of these similarity measures. It does not always produce the most suitable similarity value, but it indeed does perform at least equally good as other measures. The values in Table TABREF9 are very small, and suggest that SRCC performs well in extracting tiny associations in such cases. It is mostly a few times larger than CS and PCC when there actually exist associations between the documents.",
"These results are visually summarized in Figure FIGREF10. The two above-described examples can be clearly seen as standing out."
],
[
"In this part we will briefly present the nonlinear association between some of the TDs we have used in our experiments. Our purpose is to point out that $(d_6,d_{10})$ and $(d_7,d_{12})$ are the pairs where SRCC is the most appropriate measure for the observed content, and as such, it is able to detect the nonlinear association between them. This can be seen in Figure FIGREF12 below. The straightforward case of $d_8$ and $d_9$ also stands out here (SRCC can also detect it very well).",
"The obtained results showed that our technique shows good performance on similarity computing, although it is not a perfect measure. But, it sure comes close to convenient and widely used similarity measures such as CS and PCC. The next section provides a conclusion of our research and suggestions for further work."
],
[
"In this paper we have presented a non-standard technique for computing the similarity between TF-IDF vectors. We have propagated our idea and contributed a portion of new knowledge in this field of text analysis. We have proposed a technique that is widely used in similar fields, and our goal is to provide starting information to other researches in this area. We consider our observations promising and they should be extensively researched.",
"Our experiments have proved that our technique should be a subject for further research. Our future work will concentrate on the implementation of machine learning techniques, such as clustering and subsequent classification of textual data. We expect an information of good quality to be extracted. To summarize, the rapidly emerging area of big data and information retrieval is where our technique should reside and where it should be applied."
]
]
} | {
"question": [
"What representations for textual documents do they use?",
"Which dataset(s) do they use?",
"How do they evaluate knowledge extraction performance?"
],
"question_id": [
"6c0f97807cd83a94a4d26040286c6f89c4a0f8e0",
"13ca4bf76565564c8ec3238c0cbfacb0b41e14d2",
"70797f66d96aa163a3bee2be30a328ba61c40a18"
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"finite sequence of terms"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"A document $d$ can be defined as a finite sequence of terms (independent textual entities within a document, for example, words), namely $d=(t_1,t_2,\\dots ,t_n)$. A general idea is to associate weight to each term $t_i$ within $d$, such that"
],
"highlighted_evidence": [
"A document $d$ can be defined as a finite sequence of terms (independent textual entities within a document, for example, words), namely $d=(t_1,t_2,\\dots ,t_n)$."
]
}
],
"annotation_id": [
"01eeb2bc2d79bbc55480256d137856fcd01f27ab"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"14 TDs",
"BIBREF15"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We have used a dataset of 14 TDs to conduct our experiments. There are several subjects on which their content is based: (aliens, stories, law, news) BIBREF15."
],
"highlighted_evidence": [
"We have used a dataset of 14 TDs to conduct our experiments."
]
}
],
"annotation_id": [
"e3fe289e3e9f5d6d92deff1bfd4b84e4b4a9eb01"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"SRCC"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The results in Table TABREF7 show that SRCC performs much better in knowledge extraction. The two documents' contents contain the same idea expressed by terms in a different order that John had asked Mary to marry him before she left. It is obvious that cosine similarity cannot recognize this association, but SRCC has successfully recognized it and produced a similarity value of -0.285714."
],
"highlighted_evidence": [
"The results in Table TABREF7 show that SRCC performs much better in knowledge extraction. The two documents' contents contain the same idea expressed by terms in a different order that John had asked Mary to marry him before she left. It is obvious that cosine similarity cannot recognize this association, but SRCC has successfully recognized it and produced a similarity value of -0.285714."
]
}
],
"annotation_id": [
"5bbc00a274be4384a9623f306f95315328fe84e5"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"TABLE II. A COMPARISON BETWEEN THE MEASURES CS, SRCC, PCC",
"Fig. 1. A visual comparison of similarities produced by CS, SRCC and PCC",
"Fig. 2. The association between documents"
],
"file": [
"3-TableII-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png"
]
} |
1911.03894 | CamemBERT: a Tasty French Language Model | Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models—in all languages except English—very limited. Aiming to address this issue for French, we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and downstream applications for French NLP. | {
"section_name": [
"Introduction",
"Related Work ::: From non-contextual to contextual word embeddings",
"Related Work ::: Non-contextual word embeddings for languages other than English",
"Related Work ::: Contextualised models for languages other than English",
"CamemBERT",
"CamemBERT ::: Architecture",
"CamemBERT ::: Pretraining objective",
"CamemBERT ::: Optimisation",
"CamemBERT ::: Segmentation into subword units",
"CamemBERT ::: Pretraining data",
"Evaluation ::: Part-of-speech tagging and dependency parsing",
"Evaluation ::: Part-of-speech tagging and dependency parsing ::: Baselines",
"Evaluation ::: Named Entity Recognition",
"Evaluation ::: Named Entity Recognition ::: Baselines",
"Evaluation ::: Natural Language Inference",
"Evaluation ::: Natural Language Inference ::: Baselines",
"Experiments",
"Experiments ::: Experimental Setup ::: Pretraining",
"Experiments ::: Experimental Setup ::: Fine-tuning",
"Experiments ::: Results ::: Part-of-Speech tagging and dependency parsing",
"Experiments ::: Results ::: Natural Language Inference: XNLI",
"Experiments ::: Results ::: Named-Entity Recognition",
"Experiments ::: Discussion",
"Conclusion",
"Acknowledgments",
"Appendix ::: Impact of Whole-Word Masking"
],
"paragraphs": [
[
"Pretrained word representations have a long history in Natural Language Processing (NLP), from non-neural methods BIBREF0, BIBREF1, BIBREF2 to neural word embeddings BIBREF3, BIBREF4 and to contextualised representations BIBREF5, BIBREF6. Approaches shifted more recently from using these representations as an input to task-specific architectures to replacing these architectures with large pretrained language models. These models are then fine-tuned to the task at hand with large improvements in performance over a wide range of tasks BIBREF7, BIBREF8, BIBREF9, BIBREF10.",
"These transfer learning methods exhibit clear advantages over more traditional task-specific approaches, probably the most important being that they can be trained in an unsupervised manner. They nevertheless come with implementation challenges, namely the amount of data and computational resources needed for pretraining that can reach hundreds of gigabytes of uncompressed text and require hundreds of GPUs BIBREF11, BIBREF9. The latest transformer architecture has gone uses as much as 750GB of plain text and 1024 TPU v3 for pretraining BIBREF10. This has limited the availability of these state-of-the-art models to the English language, at least in the monolingual setting. Even though multilingual models give remarkable results, they are often larger and their results still lag behind their monolingual counterparts BIBREF12. This is particularly inconvenient as it hinders their practical use in NLP systems as well as the investigation of their language modeling capacity, something that remains to be investigated in the case of, for instance, morphologically rich languages.",
"We take advantage of the newly available multilingual corpus OSCAR BIBREF13 and train a monolingual language model for French using the RoBERTa architecture. We pretrain the model - which we dub CamemBERT- and evaluate it in four different downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI). CamemBERT improves the state of the art for most tasks over previous monolingual and multilingual approaches, which confirms the effectiveness of large pretrained language models for French.",
"We summarise our contributions as follows:",
"We train a monolingual BERT model on the French language using recent large-scale corpora.",
"We evaluate our model on four downstream tasks (POS tagging, dependency parsing, NER and natural language inference (NLI)), achieving state-of-the-art results in most tasks, confirming the effectiveness of large BERT-based models for French.",
"We release our model in a user-friendly format for popular open-source libraries so that it can serve as a strong baseline for future research and be useful for French NLP practitioners."
],
[
"The first neural word vector representations were non-contextualised word embeddings, most notably word2vec BIBREF3, GloVe BIBREF4 and fastText BIBREF14, which were designed to be used as input to task-specific neural architectures. Contextualised word representations such as ELMo BIBREF5 and flair BIBREF6, improved the expressivity of word embeddings by taking context into account. They improved the performance of downstream tasks when they replaced traditional word representations. This paved the way towards larger contextualised models that replaced downstream architectures in most tasks. These approaches, trained with language modeling objectives, range from LSTM-based architectures such as ULMFiT BIBREF15 to the successful transformer-based architectures such as GPT2 BIBREF8, BERT BIBREF7, RoBERTa BIBREF9 and more recently ALBERT BIBREF16 and T5 BIBREF10."
],
[
"Since the introduction of word2vec BIBREF3, many attempts have been made to create monolingual models for a wide range of languages. For non-contextual word embeddings, the first two attempts were by BIBREF17 and BIBREF18 who created word embeddings for a large number of languages using Wikipedia. Later BIBREF19 trained fastText word embeddings for 157 languages using Common Crawl and showed that using crawled data significantly increased the performance of the embeddings relatively to those trained only on Wikipedia."
],
[
"Following the success of large pretrained language models, they were extended to the multilingual setting with multilingual BERT , a single multilingual model for 104 different languages trained on Wikipedia data, and later XLM BIBREF12, which greatly improved unsupervised machine translation. A few monolingual models have been released: ELMo models for Japanese, Portuguese, German and Basque and BERT for Simplified and Traditional Chinese and German.",
"However, to the best of our knowledge, no particular effort has been made toward training models for languages other than English, at a scale similar to the latest English models (e.g. RoBERTa trained on more than 100GB of data)."
],
[
"Our approach is based on RoBERTa BIBREF9, which replicates and improves the initial BERT by identifying key hyper-parameters for more robust performance.",
"In this section, we describe the architecture, training objective, optimisation setup and pretraining data that was used for CamemBERT.",
"CamemBERT differs from RoBERTa mainly with the addition of whole-word masking and the usage of SentencePiece tokenisation BIBREF20."
],
[
"Similar to RoBERTa and BERT, CamemBERT is a multi-layer bidirectional Transformer BIBREF21. Given the widespread usage of Transformers, we do not describe them in detail here and refer the reader to BIBREF21. CamemBERT uses the original BERT $_{\\small \\textsc {BASE}}$ configuration: 12 layers, 768 hidden dimensions, 12 attention heads, which amounts to 110M parameters."
],
[
"We train our model on the Masked Language Modeling (MLM) task. Given an input text sequence composed of $N$ tokens $x_1, ..., x_N$, we select $15\\%$ of tokens for possible replacement. Among those selected tokens, 80% are replaced with the special $<$mask$>$ token, 10% are left unchanged and 10% are replaced by a random token. The model is then trained to predict the initial masked tokens using cross-entropy loss.",
"Following RoBERTa we dynamically mask tokens instead of fixing them statically for the whole dataset during preprocessing. This improves variability and makes the model more robust when training for multiple epochs.",
"Since we segment the input sentence into subwords using SentencePiece, the input tokens to the models can be subwords. An upgraded version of BERT and BIBREF22 have shown that masking whole words instead of individual subwords leads to improved performance. Whole-word masking (WWM) makes the training task more difficult because the model has to predict a whole word instead of predicting only part of the word given the rest. As a result, we used WWM for CamemBERT by first randomly sampling 15% of the words in the sequence and then considering all subword tokens in each of these 15% words for candidate replacement. This amounts to a proportion of selected tokens that is close to the original 15%. These tokens are then either replaced by $<$mask$>$ tokens (80%), left unchanged (10%) or replaced by a random token.",
"Subsequent work has shown that the next sentence prediction task (NSP) originally used in BERT does not improve downstream task performance BIBREF12, BIBREF9, we do not use NSP as a consequence."
],
[
"Following BIBREF9, we optimise the model using Adam BIBREF23 ($\\beta _1 = 0.9$, $\\beta _2 = 0.98$) for 100k steps. We use large batch sizes of 8192 sequences. Each sequence contains at most 512 tokens. We enforce each sequence to only contain complete sentences. Additionally, we used the DOC-SENTENCES scenario from BIBREF9, consisting of not mixing multiple documents in the same sequence, which showed slightly better results."
],
[
"We segment the input text into subword units using SentencePiece BIBREF20. SentencePiece is an extension of Byte-Pair encoding (BPE) BIBREF24 and WordPiece BIBREF25 that does not require pre-tokenisation (at the word or token level), thus removing the need for language-specific tokenisers. We use a vocabulary size of 32k subword tokens. These are learned on $10^7$ sentences sampled from the pretraining dataset. We do not use subword regularisation (i.e. sampling from multiple possible segmentations) in our implementation for simplicity."
],
[
"Pretrained language models can be significantly improved by using more data BIBREF9, BIBREF10. Therefore we used French text extracted from Common Crawl, in particular, we use OSCAR BIBREF13 a pre-classified and pre-filtered version of the November 2018 Common Craw snapshot.",
"OSCAR is a set of monolingual corpora extracted from Common Crawl, specifically from the plain text WET format distributed by Common Crawl, which removes all HTML tags and converts all text encodings to UTF-8. OSCAR follows the same approach as BIBREF19 by using a language classification model based on the fastText linear classifier BIBREF26, BIBREF27 pretrained on Wikipedia, Tatoeba and SETimes, which supports 176 different languages.",
"OSCAR performs a deduplication step after language classification and without introducing a specialised filtering scheme, other than only keeping paragraphs containing 100 or more UTF-8 encoded characters, making OSCAR quite close to the original Crawled data.",
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
],
[
"We fist evaluate CamemBERT on the two downstream tasks of part-of-speech (POS) tagging and dependency parsing. POS tagging is a low-level syntactic task, which consists in assigning to each word its corresponding grammatical category. Dependency parsing consists in predicting the labeled syntactic tree capturing the syntactic relations between words.",
"We run our experiments using the Universal Dependencies (UD) paradigm and its corresponding UD POS tag set BIBREF28 and UD treebank collection version 2.2 BIBREF29, which was used for the CoNLL 2018 shared task. We perform our work on the four freely available French UD treebanks in UD v2.2: GSD, Sequoia, Spoken, and ParTUT.",
"GSD BIBREF30 is the second-largest treebank available for French after the FTB (described in subsection SECREF25), it contains data from blogs, news articles, reviews, and Wikipedia. The Sequoia treebank BIBREF31, BIBREF32 comprises more than 3000 sentences, from the French Europarl, the regional newspaper L’Est Républicain, the French Wikipedia and documents from the European Medicines Agency. Spoken is a corpus converted automatically from the Rhapsodie treebank BIBREF33, BIBREF34 with manual corrections. It consists of 57 sound samples of spoken French with orthographic transcription and phonetic transcription aligned with sound (word boundaries, syllables, and phonemes), syntactic and prosodic annotations. Finally, ParTUT is a conversion of a multilingual parallel treebank developed at the University of Turin, and consisting of a variety of text genres, including talks, legal texts, and Wikipedia articles, among others; ParTUT data is derived from the already-existing parallel treebank Par(allel)TUT BIBREF35 . Table TABREF23 contains a summary comparing the sizes of the treebanks.",
"We evaluate the performance of our models using the standard UPOS accuracy for POS tagging, and Unlabeled Attachment Score (UAS) and Labeled Attachment Score (LAS) for dependency parsing. We assume gold tokenisation and gold word segmentation as provided in the UD treebanks."
],
[
"To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT). We then compare our models to UDify BIBREF36. UDify is a multitask and multilingual model based on mBERT that is near state-of-the-art on all UD languages including French for both POS tagging and dependency parsing.",
"It is relevant to compare CamemBERT to UDify on those tasks because UDify is the work that pushed the furthest the performance in fine-tuning end-to-end a BERT-based model on downstream POS tagging and dependency parsing. Finally, we compare our model to UDPipe Future BIBREF37, a model ranked 3rd in dependency parsing and 6th in POS tagging during the CoNLL 2018 shared task BIBREF38. UDPipe Future provides us a strong baseline that does not make use of any pretrained contextual embedding.",
"We will compare to the more recent cross-lingual language model XLM BIBREF12, as well as the state-of-the-art CoNLL 2018 shared task results with predicted tokenisation and segmentation in an updated version of the paper."
],
[
"Named Entity Recognition (NER) is a sequence labeling task that consists in predicting which words refer to real-world objects, such as people, locations, artifacts and organisations. We use the French Treebank (FTB) BIBREF39 in its 2008 version introduced by cc-clustering:09short and with NER annotations by sagot2012annotation. The NER-annotated FTB contains more than 12k sentences and more than 350k tokens extracted from articles of the newspaper Le Monde published between 1989 and 1995. In total, it contains 11,636 entity mentions distributed among 7 different types of entities, namely: 2025 mentions of “Person”, 3761 of “Location”, 2382 of “Organisation”, 3357 of “Company”, 67 of “Product”, 15 of “POI” (Point of Interest) and 29 of “Fictional Character”.",
"A large proportion of the entity mentions in the treebank are multi-word entities. For NER we therefore report the 3 metrics that are commonly used to evaluate models: precision, recall, and F1 score. Here precision measures the percentage of entities found by the system that are correctly tagged, recall measures the percentage of named entities present in the corpus that are found and the F1 score combines both precision and recall measures giving a general idea of a model's performance."
],
[
"Most of the advances in NER haven been achieved on English, particularly focusing on the CoNLL 2003 BIBREF40 and the Ontonotes v5 BIBREF41, BIBREF42 English corpora. NER is a task that was traditionally tackled using Conditional Random Fields (CRF) BIBREF43 which are quite suited for NER; CRFs were later used as decoding layers for Bi-LSTM architectures BIBREF44, BIBREF45 showing considerable improvements over CRFs alone. These Bi-LSTM-CRF architectures were later enhanced with contextualised word embeddings which yet again brought major improvements to the task BIBREF5, BIBREF6. Finally, large pretrained architectures settled the current state of the art showing a small yet important improvement over previous NER-specific architectures BIBREF7, BIBREF46.",
"In non-English NER the CoNLL 2002 shared task included NER corpora for Spanish and Dutch corpora BIBREF47 while the CoNLL 2003 included a German corpus BIBREF40. Here the recent efforts of BIBREF48 settled the state of the art for Spanish and Dutch, while BIBREF6 did it for German.",
"In French, no extensive work has been done due to the limited availability of NER corpora. We compare our model with the strong baselines settled by BIBREF49, who trained both CRF and BiLSTM-CRF architectures on the FTB and enhanced them using heuristics and pretrained word embeddings."
],
[
"We also evaluate our model on the Natural Language Inference (NLI) task, using the French part of the XNLI dataset BIBREF50. NLI consists in predicting whether a hypothesis sentence is entailed, neutral or contradicts a premise sentence.",
"The XNLI dataset is the extension of the Multi-Genre NLI (MultiNLI) corpus BIBREF51 to 15 languages by translating the validation and test sets manually into each of those languages. The English training set is also machine translated for all languages. The dataset is composed of 122k train, 2490 valid and 5010 test examples. As usual, NLI performance is evaluated using accuracy.",
"To evaluate a model on a language other than English (such as French), we consider the two following settings:",
"TRANSLATE-TEST: The French test set is machine translated into English, and then used with an English classification model. This setting provides a reasonable, although imperfect, way to circumvent the fact that no such data set exists for French, and results in very strong baseline scores.",
"TRANSLATE-TRAIN: The French model is fine-tuned on the machine-translated English training set and then evaluated on the French test set. This is the setting that we used for CamemBERT."
],
[
"For the TRANSLATE-TEST setting, we report results of the English RoBERTa to act as a reference.",
"In the TRANSLATE-TRAIN setting, we report the best scores from previous literature along with ours. BiLSTM-max is the best model in the original XNLI paper, mBERT which has been reported in French in BIBREF52 and XLM (MLM+TLM) is the best-presented model from BIBREF50."
],
[
"In this section, we measure the performance of CamemBERT by evaluating it on the four aforementioned tasks: POS tagging, dependency parsing, NER and NLI."
],
[
"We use the RoBERTa implementation in the fairseq library BIBREF53. Our learning rate is warmed up for 10k steps up to a peak value of $0.0007$ instead of the original $0.0001$ given our large batch size (8192). The learning rate fades to zero with polynomial decay. We pretrain our model on 256 Nvidia V100 GPUs (32GB each) for 100k steps during 17h."
],
[
"For each task, we append the relevant predictive layer on top of CamemBERT's Transformer architecture. Following the work done on BERT BIBREF7, for sequence tagging and sequence labeling we append a linear layer respectively to the $<$s$>$ special token and to the first subword token of each word. For dependency parsing, we plug a bi-affine graph predictor head as inspired by BIBREF54 following the work done on multilingual parsing with BERT by BIBREF36. We refer the reader to these two articles for more details on this module.",
"We fine-tune independently CamemBERT for each task and each dataset. We optimise the model using the Adam optimiser BIBREF23 with a fixed learning rate. We run a grid search on a combination of learning rates and batch sizes. We select the best model on the validation set out of the 30 first epochs.",
"Although this might push the performances even further, for all tasks except NLI, we don't apply any regularisation techniques such as weight decay, learning rate warm-up or discriminative fine-tuning. We show that fine-tuning CamemBERT in a straight-forward manner leads to state-of-the-art results on most tasks and outperforms the existing BERT-based models in most cases.",
"The POS tagging, dependency parsing, and NER experiments are run using hugging face's Transformer library extended to support CamemBERT and dependency parsing BIBREF55. The NLI experiments use the fairseq library following the RoBERTa implementation."
],
[
"For POS tagging and dependency parsing, we compare CamemBERT to three other near state-of-the-art models in Table TABREF32. CamemBERT outperforms UDPipe Future by a large margin for all treebanks and all metrics. Despite a much simpler optimisation process, CamemBERT beats UDify performances on all the available French treebanks.",
"CamemBERT also demonstrates higher performances than mBERT on those tasks. We observe a larger error reduction for parsing than for tagging. For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT. For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT."
],
[
"On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters)."
],
[
"For named entity recognition, our experiments show that CamemBERT achieves a slightly better precision than the traditional CRF-based SEM architectures described above in Section SECREF25 (CRF and Bi-LSTM+CRF), but shows a dramatic improvement in finding entity mentions, raising the recall score by 3.5 points. Both improvements result in a 2.36 point increase in the F1 score with respect to the best SEM architecture (BiLSTM-CRF), giving CamemBERT the state of the art for NER on the FTB. One other important finding is the results obtained by mBERT. Previous work with this model showed increased performance in NER for German, Dutch and Spanish when mBERT is used as contextualised word embedding for an NER-specific model BIBREF48, but our results suggest that the multilingual setting in which mBERT was trained is simply not enough to use it alone and fine-tune it for French NER, as it shows worse performance than even simple CRF models, suggesting that monolingual models could be better at NER."
],
[
"CamemBERT displays improved performance compared to prior work for the 4 downstream tasks considered. This confirms the hypothesis that pretrained language models can be effectively fine-tuned for various downstream tasks, as observed for English in previous work. Moreover, our results also show that dedicated monolingual models still outperform multilingual ones. We explain this point in two ways. First, the scale of data is possibly essential to the performance of CamemBERT. Indeed, we use 138GB of uncompressed text vs. 57GB for mBERT. Second, with more data comes more diversity in the pretraining distribution. Reaching state-of-the-art performances on 4 different tasks and 6 different datasets requires robust pretrained models. Our results suggest that the variability in the downstream tasks and datasets considered is handled more efficiently by a general language model than by Wikipedia-pretrained models such as mBERT."
],
[
"CamemBERT improves the state of the art for multiple downstream tasks in French. It is also lighter than other BERT-based approaches such as mBERT or XLM. By releasing our model, we hope that it can serve as a strong baseline for future research in French NLP, and expect our experiments to be reproduced in many other languages. We will publish an updated version in the near future where we will explore and release models trained for longer, with additional downstream tasks, baselines (e.g. XLM) and analysis, we will also train additional models with potentially cleaner corpora such as CCNet BIBREF56 for more accurate performance evaluation and more complete ablation."
],
[
"This work was partly funded by three French National grants from the Agence Nationale de la Recherche, namely projects PARSITI (ANR-16-CE33-0021), SoSweet (ANR-15-CE38-0011) and BASNUM (ANR-18-CE38-0003), as well as by the last author's chair in the PRAIRIE institute."
],
[
"We analyze the addition of whole-word masking on the downstream performance of CamemBERT. As reported for English on other downstream tasks, whole word masking improves downstream performances for all tasks but NER as seen in Table TABREF46. NER is highly sensitive to capitalisation, prefixes, suffixes and other subword features that could be used by a model to correctly identify entity mentions. Thus the added information by learning the masking at a subword level rather than at whole-word level seems to have a detrimental effect on downstream NER results."
]
]
} | {
"question": [
"What is CamemBERT trained on?",
"Which tasks does CamemBERT not improve on?",
"What is the state of the art?",
"How much better was results of CamemBERT than previous results on these tasks?",
"Was CamemBERT compared against multilingual BERT on these tasks?",
"How long was CamemBERT trained?",
"What data is used for training CamemBERT?"
],
"question_id": [
"71f2b368228a748fd348f1abf540236568a61b07",
"d3d4eef047aa01391e3e5d613a0f1f786ae7cfc7",
"63723c6b398100bba5dc21754451f503cb91c9b8",
"5471766ca7c995dd7f0f449407902b32ac9db269",
"dc49746fc98647445599da9d17bc004bafdc4579",
"8720c096c8b990c7b19f956ee4930d5f2c019e2b",
"b573b36936ffdf1d70e66f9b5567511c989b46b2"
],
"nlp_background": [
"two",
"two",
"two",
"zero",
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"unshuffled version of the French OSCAR corpus"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
],
"highlighted_evidence": [
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
]
}
],
"annotation_id": [
"e9e1b87a031a0b9b9f2f47eede9097c58a6b500f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Experiments ::: Results ::: Natural Language Inference: XNLI",
"On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters)."
],
"highlighted_evidence": [
"Experiments ::: Results ::: Natural Language Inference: XNLI\nOn the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters)."
]
}
],
"annotation_id": [
"15bd8457ee6ef5ee00c78810010f9b9613730b86"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "POS and DP task: CONLL 2018\nNER task: (no extensive work) Strong baselines CRF and BiLSTM-CRF\nNLI task: mBERT or XLM (not clear from text)",
"evidence": [
"We will compare to the more recent cross-lingual language model XLM BIBREF12, as well as the state-of-the-art CoNLL 2018 shared task results with predicted tokenisation and segmentation in an updated version of the paper.",
"In French, no extensive work has been done due to the limited availability of NER corpora. We compare our model with the strong baselines settled by BIBREF49, who trained both CRF and BiLSTM-CRF architectures on the FTB and enhanced them using heuristics and pretrained word embeddings.",
"In the TRANSLATE-TRAIN setting, we report the best scores from previous literature along with ours. BiLSTM-max is the best model in the original XNLI paper, mBERT which has been reported in French in BIBREF52 and XLM (MLM+TLM) is the best-presented model from BIBREF50."
],
"highlighted_evidence": [
"We will compare to the more recent cross-lingual language model XLM BIBREF12, as well as the state-of-the-art CoNLL 2018 shared task results with predicted tokenisation and segmentation in an updated version of the paper.",
"In French, no extensive work has been done due to the limited availability of NER corpora. We compare our model with the strong baselines settled by BIBREF49, who trained both CRF and BiLSTM-CRF architectures on the FTB and enhanced them using heuristics and pretrained word embeddings.",
"In the TRANSLATE-TRAIN setting, we report the best scores from previous literature along with ours. BiLSTM-max is the best model in the original XNLI paper, mBERT which has been reported in French in BIBREF52 and XLM (MLM+TLM) is the best-presented model from BIBREF50."
]
}
],
"annotation_id": [
"23b1324a33d14f2aac985bc2fca7d204607225ed"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"2.36 point increase in the F1 score with respect to the best SEM architecture",
"on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM)",
"lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa",
"For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT",
"For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"CamemBERT also demonstrates higher performances than mBERT on those tasks. We observe a larger error reduction for parsing than for tagging. For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT. For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT.",
"On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M). However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa. It should be noted that CamemBERT uses far fewer parameters than RoBERTa (110M vs. 355M parameters).",
"For named entity recognition, our experiments show that CamemBERT achieves a slightly better precision than the traditional CRF-based SEM architectures described above in Section SECREF25 (CRF and Bi-LSTM+CRF), but shows a dramatic improvement in finding entity mentions, raising the recall score by 3.5 points. Both improvements result in a 2.36 point increase in the F1 score with respect to the best SEM architecture (BiLSTM-CRF), giving CamemBERT the state of the art for NER on the FTB. One other important finding is the results obtained by mBERT. Previous work with this model showed increased performance in NER for German, Dutch and Spanish when mBERT is used as contextualised word embedding for an NER-specific model BIBREF48, but our results suggest that the multilingual setting in which mBERT was trained is simply not enough to use it alone and fine-tune it for French NER, as it shows worse performance than even simple CRF models, suggesting that monolingual models could be better at NER."
],
"highlighted_evidence": [
"For POS tagging, we observe error reductions of respectively 0.71% for GSD, 0.81% for Sequoia, 0.7% for Spoken and 0.28% for ParTUT. For parsing, we observe error reductions in LAS of 2.96% for GSD, 3.33% for Sequoia, 1.70% for Spoken and 1.65% for ParTUT.",
"On the XNLI benchmark, CamemBERT obtains improved performance over multilingual language models on the TRANSLATE-TRAIN setting (81.2 vs. 80.2 for XLM) while using less than half the parameters (110M vs. 250M).",
"However, its performance still lags behind models trained on the original English training set in the TRANSLATE-TEST setting, 81.2 vs. 82.91 for RoBERTa.",
"Both improvements result in a 2.36 point increase in the F1 score with respect to the best SEM architecture (BiLSTM-CRF), giving CamemBERT the state of the art for NER on the FTB."
]
}
],
"annotation_id": [
"742351bb0ed07c34bbc4badb7fdd255761bd664a"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT). We then compare our models to UDify BIBREF36. UDify is a multitask and multilingual model based on mBERT that is near state-of-the-art on all UD languages including French for both POS tagging and dependency parsing."
],
"highlighted_evidence": [
"To demonstrate the value of building a dedicated version of BERT for French, we first compare CamemBERT to the multilingual cased version of BERT (designated as mBERT)."
]
}
],
"annotation_id": [
"92ee0c954a5d2197aa496d78771ac58396ee8035"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"825945c12f43ef2d07ba436f460fa58d3829dde3"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"unshuffled version of the French OSCAR corpus"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
],
"highlighted_evidence": [
"We use the unshuffled version of the French OSCAR corpus, which amounts to 138GB of uncompressed text and 32.7B SentencePiece tokens."
]
}
],
"annotation_id": [
"02150b6860e8e3097f4f1cb1c60d42af03952c54"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Sizes in Number of tokens, words and phrases of the 4 treebanks used in the evaluations of POS-tagging and dependency parsing.",
"Table 2: Final POS and dependency parsing scores of CamemBERT and mBERT (fine-tuned in the exact same conditions as CamemBERT), UDify as reported in the original paper on 4 French treebanks (French GSD, Spoken, Sequoia and ParTUT), reported on test sets (4 averaged runs) assuming gold tokenisation. Best scores in bold, second to best underlined.",
"Table 3: Accuracy of models for French on the XNLI test set. Best scores in bold, second to best underlined.",
"Table 4: Results for NER on the FTB. Best scores in bold, second to best underlined.",
"Table 5: Comparing subword and whole-word masking procedures on the validation sets of each task. Each score is an average of 4 runs with different random seeds. For POS tagging and Dependency parsing, we average the scores on the 4 treebanks.)"
],
"file": [
"3-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"10-Table5-1.png"
]
} |
2001.09899 | Vocabulary-based Method for Quantifying Controversy in Social Media | Identifying controversial topics is not only interesting from a social point of view, it also enables the application of methods to avoid information segregation, creating better discussion contexts and reaching agreements in the best cases. In this paper we develop a systematic method for controversy detection based primarily on the jargon used by the communities in social media. Our method dispenses with the use of domain-specific knowledge, is language-agnostic, efficient and easy to apply. We perform an extensive set of experiments across many languages, regions and contexts, taking controversial and non-controversial topics. We find that our vocabulary-based measure performs better than state-of-the-art measures that are based only on the community graph structure. Moreover, we show that it is possible to detect polarization through text analysis. | {
"section_name": [
"Introduction",
"Related work",
"Method",
"Experiments",
"Experiments ::: Topic definition",
"Experiments ::: Datasets",
"Experiments ::: Results",
"Discussions",
"Discussions ::: Limitations",
"Discussions ::: Conclusions",
"Details on the discussions"
],
"paragraphs": [
[
"Controversy is a phenomenom with a high impact at various levels. It has been broadly studied from the perspective of different disciplines, ranging from the seminal analysis of the conflicts within the members of a karate club BIBREF0 to political issues in modern times BIBREF1, BIBREF2. The irruption of digital social networks BIBREF3 gave raise to new ways of intentionally intervening on them for taking some advantage BIBREF4, BIBREF5. Moreover highly contrasting points of view in some groups tend to provoke conflicts that lead to attacks from one community to the other by harassing, “brigading”, or “trolling” it BIBREF6. The existing literature shows different issues that controversy brings up such as splitting of communities, biased information, hateful discussions and attacks between groups, generally proposing ways to solve them. For example, Kumar, Srijan, et al. BIBREF6 analyze many techniques to defend us from attacks in Reddit while Stewart, et al. BIBREF4 insinuate that there was external interference in Twitter during the 2016 US presidential elections to benefit one candidate. Also, as shown in BIBREF7, detecting controversy could provide the basis to improve the “news diet\" of readers, offering the possibility to connect users with different points of views by recommending them new content to read BIBREF8.",
"Moreover, other studies on “bridging echo chambers” BIBREF9 and the positive effects of intergroup dialogue BIBREF10, BIBREF11 suggest that direct engagement could be effective for mitigating such conflicts. Therefore, easily and automatically identifying controversial topics could allow us to quickly implement different strategies for preventing miss-information, fights and bias. Quantifying the controversy is even more powerful, as it allows us to establish controversy levels, and in particular to classify controversial and non-controversial topics by establishing a threshold score that separates the two types of topics. With this aim, we propose in this work a systematic, language-agnostic method to quantify controversy on social networks taking tweet's content as root input. Our main contribution is a new vocabulary-based method that works in any language and equates the performance of state-of-the-art structure-based methods. Finally, controversy quantification through vocabulary analysis opens several research avenues to analyze whether polarization is being created, maintained or augmented by the ways of talking of each community.",
"Having this in mind and if we draw from the premise that when a discussion has a high controversy it is in general due to the presence of two principal communities fighting each other (or, conversely, that when there is no controversy there is just one principal community the members of which share a common point of view), we can measure the controversy by detecting if the discussion has one or two principal jargons in use. Our method is tested on Twitter datasets. This microblogging platform has been widely used to analyze discussions and polarization BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF2. It is a natural choice for these kind of problems, as it represents one of the main fora for public debate in online social media BIBREF15, it is a common destination for affiliative expressions BIBREF16 and is often used to report and read news about current events BIBREF17. An extra advantage of Twitter for this kind of studies is the availability of real-time data generated by millions of users. Other social media platforms offer similar data-sharing services, but few can match the amount of data and the accompanied documentation provided by Twitter. One last asset of Twitter for our work is given by retweets, whom typically indicate endorsement BIBREF18 and hence become a useful concept to model discussions as we can set “who is with who\". However, our method has a general approach and it could be used a priori in any social network. In this work we report excellent result tested on Twitter but in future work we are going to test it in other social networks.",
"Our paper is organized as follows: in Section SECREF2, we review related work. Section SECREF3 contains the detailed explanation of the pipeline we use for quantifying controversy of a topic, and each of its stages. In Section SECREF4 we report the results of an extensive empirical evaluation of the proposed measure of controversy. Finally, Section SECREF5 is devoted to discuss possible improvements and directions for future work, as well as lessons learned."
],
[
"Many previous works are dedicated to quantifying the polarization observed in online social networks and social media BIBREF1, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23. The main characteristic of those works is that the measures proposed are based on the structural characteristics of the underlying graph. Among them, we highlight the work of Garimella et al.BIBREF23 that presents an extensive comparison of controversy measures, different graph-building approaches, and data sources, achieving the best performance of all. In their research they propose different metrics to measure polarization on Twitter. Their techniques based on the structure of the endorsement graph can successfully detect whether a discussion (represented by a set of tweets), is controversial or not regardless of the context and most importantly, without the need of any domain expertise. They also consider two different methods to measure controversy based on the analysis of the posts contents, but both fail when used to create a measure of controversy.",
"Matakos et al. BIBREF24 develop a polarization index. Their measure captures the tendency of opinions to concentrate in network communities, creating echo-chambers. They obtain a good performance at identifying controversy by taking into account both the network structure and the existing opinions of users. However, they model opinions as positive or negative with a real number between -1 and 1. Their performance is good, but although it is an opinion-based method it is not a text-related one.Other recent works BIBREF25, BIBREF26, BIBREF27 have shown that communities may express themselves with different terms or ways of speaking, use different jargon, which in turn can be detected with the use of text-related techniques.",
"In his thesis BIBREF28, Jang explains controversy via generating a summary of two conflicting stances that make up the controversy. This work shows that a specific sub-set of tweets could represent the two opposite positions in a polarized debate.",
"A good tool to see how communities interact is ForceAtlas2 BIBREF29, a force-directed layout widely used for visualization. This layout has been recently found to be very useful at visualizing community interactions BIBREF30, as this algorithm will draw groups with little communication between them in different areas, whereas, if they have many interactions they will be drawn closer to each other. Therefore, whenever there is controversy the layout will show two well separated groups and will tend to show only one big community otherwise.",
"The method we propose to measure the controversy equates in accuracy the one developed by Garimella et al.BIBREF23 and improves considerably computing time and robustness wrt the amount of data needed to effectively apply it. Our method is also based on a graph approach but it has its main focus on the vocabulary. We first train an NLP classifier that estimates opinion polarity of main users, then we run label-propagation BIBREF31 on the endorsement graph to get polarity of the whole network. Finally we compute the controversy score through a computation inspired in Dipole Moment, a measure used in physics to estimate electric polarity on a system. In our experiments we use the same data-sets from other works BIBREF32, BIBREF23, BIBREF33 as well as other datasets that we collected by us using a similar criterion (described in Section SECREF4)."
],
[
"Our approach to measuring controversy is based on a systematic way of characterizing social media activity through its content. We employ a pipeline with five stages, namely graph building, community identification, model training, predicting and controversy measure. The final output of the pipeline is a value that measures how controversial a topic is, with higher values corresponding to higher degrees of controversy. The method is based on analysing posts content through Fasttext BIBREF34, a library for efficient learning of word representations and sentence classification developed by Facebook Research team. In short, our method works as follows: through Fasttext we train a language-agnostic model which can predict the community of many users by their jargon. Then we take there predictions and compute a score based on the physic notion Dipole Moment using a language approach to identify core or characteristic users and set the polarity trough them. We provide a detailed description of each stage in the following.",
"Graph Building",
"This paragraph provides details about the approach used to build graphs from raw data. As we said in Section SECREF1, we extract our discussions from Twitter. Our purpose is to build a conversation graph that represents activity related to a single topic of discussion -a debate about a specific event.",
"For each topic, we build a graph $G$ where we assign a vertex to each user who contributes to it and we add a directed edge from node $u$ to node $v$ whenever user $u$ retweets a tweet posted by $v$. Retweets typically indicate endorsement BIBREF18: users who retweet signal endorsement of the opinion expressed in the original tweet by propagating it further. Retweets are not constrained to occur only between users who are connected in Twitter's social network, but users are allowed to retweet posts generated by any other user. As many other works in literature BIBREF5, BIBREF35, BIBREF36, BIBREF37, BIBREF4, BIBREF2 we establish that one retweet among a pair of users are needed to define an edge between them.",
"Community Identification",
"To identify a community's jargon we need to be very accurate at defining its members. If we, in our will of finding two principal communities, force the partition of the graph in that precise number of communities, we may be adding noise in the jargon of the principal communities that are fighting each other. Because of that, we decide to cluster the graph trying two popular algorithms: Walktrap BIBREF38 and Louvain BIBREF39. Both are structure-based algorithms that have very good performance with respect to the Modularity Q measure. These techniques does not detect a fixed number of clusters; their output will depend on the Modularity Q optimization, resulting in less “noisy\" communities. The main differences between the two methods, in what regards our work, are that Louvain is a much faster heuristic algorithm but produces clusters with worse Modularity Q. Therefore, in order to analyze the trade-off between computing time and quality we decide to test both methods. At this step we want to capture the tweets of the principal communities to create the model that could differentiate them. Therefore, we take the two communities identified by the cluster algorithm that have the maximum number of users, and use them for the following step of our method.",
"Model Training",
"After detecting the principal communities we create our training dataset to feed the model. To do that, we extract the tweets of each cluster, we sanitize and we subject them to some transformations. First, we remove duplicate tweets -e.g. retweets without additional text. Second, we remove from the text of the tweets user names, links, punctuation, tabs, leading and lagging blanks, general spaces and “RT\" - the text that points that a tweet is in fact a retweet.",
"As shown in previous works, emojis are correlated with sentiment BIBREF40. Moreover, as we think that communities will express different sentiment during discussion, it is forseeable that emojis will play an important role as separators of tweets that differentiate between the two sides. Accordingly, we decide to add them to the train-set by translating each emoji into a different word. For example, the emoji :) will be translated into happy and :( into sad. Relations between emojis and words are defined in the R library textclean.",
"Finally, we group tweets by user concatenating them in one string and labeling them with the user's community, namely with tags C1 and C2, corresponding respectively to the biggest and second biggest groups. It is important to note that we take the same number of users of each community to prevent bias in the model. Thus, we use the number of users of the smallest principal community.",
"The train-set built that way is used to feed the model. As we said, we use Fasttext BIBREF34 to do this training. To define the values of the hyper-parameters we use the findings of BIBREF41. In their work they investigate the best hyper-parameters to train word embedding models using Fasttext BIBREF34 and Twitter data. We also change the default value of the hyper-parameter epoch to 20 instead of 5 because we want more convergence preventing as much as possible the variance between different training. These values could change in other context or social networks where we have more text per user or different discussion dynamics.",
"Predicting",
"The next stage consists of identifying the characteristic users of each side the discussion. These are the users that better represent the jargon of each side. To do that, tweets of the users belonging to the largest connected component of the graph are sanitized and transformed exactly as in the Training step.",
"We decide to restrict to the largest connected component because in all cases it contains more than 90% of the nodes. The remaining 10% of the users don't participate in the discussion from a collective point of view but rather in an isolated way and this kind of intervention does not add interesting information to our approach. Then, we remove from this component users with degree smaller or equal to 2 (i.e. users that were retweeted by another user or retweeted other person less than three times in total). Their participation in the discussion is marginal, consequently they are not relevant wrt controversy as they add more noise than information at measuring time. This step could be adjusted differently in a different social network. We name this result component root-graph.",
"Finally, let's see how we do classification. Considering that Fasttext returns for each classification both the predicted tag and the probability of the prediction, we classify each user of the resulting component by his sanitized tweets with our trained model, and take users that were tagged with a probability greater or equal than 0.9. These are the characteristic users that will be used in next step to compute the controversy measure.",
"Controversy Measure",
"This section describes the controversy measures used in this work. This computation is inspired in the measure presented by Morales et al. BIBREF2, and is based on the notion of dipole moment that has its origin in physics.",
"First, we assign to the characteristic users the probability returned by the model, negativizing them if the predicted tag was C2. Therefore, these users are assigned values in the set [-1,-0.9] $\\cup $ [0.9,1]. Then, we set values for the rest of the users of the root-graph by label-propagation BIBREF31 - an iterative algorithm to propagate values through a graph by node's neighborhood.",
"Let $n^{+}$ and $n^{-}$ be the number of vertices $V$ with positive and negative values, respectively, and $\\Delta A = \\dfrac{\\mid n^{+} - n^{-}\\mid }{\\mid V \\mid }$ the absolute difference of their normalized size. Moreover, let $gc^{+}$ ($gc^{-}$) be the average value among vertices $n^{+}$ ($n^{-}$) and set $\\tau $ as half their absolute difference, $\\tau = \\dfrac{\\mid gc^{+} - gc^{- }\\mid }{2}$. The dipole moment content controversy measure is defined as: $\\textit {DMC} = (1 -\\Delta A)\\tau $.",
"The rationale for this measure is that if the two sides are well separated, then label propagation will assign different extreme values to the two partitions, where users from one community will have values near to 1 and users from the other to -1, leading to higher values of the DMC measure. Note also that larger differences in the size of the two partitions (reflected in the value of $\\Delta A$) lead to smaller values for the measure, which takes values between zero and one."
],
[
"In this section we report the results obtained by running the above proposed method over different discussions."
],
[
"In the literature, a topic is often defined by a single hashtag. However, this might be too restrictive in many cases. In our approach, a topic is operationalized as an specific hashtags or key words. Sometimes a discussion in a particular moment could not have a defined hashtag but it could be around a certain keyword, i.e. a word or expression that is not specifically a hashtag but it is widely used in the topic. For example during the Brazilian presidential elections in 2018 we captured the discussion by the mentions to the word Bolsonaro, that is the principal candidate's surname.",
"Thus, for each topic we retrieve all the tweets that contain one of its hashtags or the keyword and that are generated during the observation window. We also ensure that the selected topic is associated with a large enough volume of activity."
],
[
"In this section we detail the discussions we use to test our metric and how we determine the ground truth (i.e. if the discussion is controversial or not). We use thirty different discussions that took place between March 2015 and June 2019, half of them with controversy and half without it. We considered discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. We also studied these discussions taking first 140 characters and then 280 from each tweet to analyze the difference in performance and computing time wrt the length of the posts.",
"To define the amount of data needed to run our method we established that the Fasttext model has to predict at least one user of each community with a probability greater or equal than 0.9 during ten different trainings. If that is not the case, we are not able to use DPC method. This decision made us consider only a subset of the datasets used in BIBREF23, because due to the time elapsed since their work, many tweets had been deleted and consequently the volume of the data was not enough for our framework. To enlarge our experiment base we added new debates, more detailed information about each one is shown in Table TABREF24 in UNKREF6. To select new discussions and to determine if they are controversial or not we looked for topics widely covered by mainstream media, and that have generated ample discussion, both online and offline. For non-controversy discussions we focused on “soft news\" and entertainment, but also to events that, while being impactful and/or dramatic, did not generate large controversies. To validate that intuition, we manually checked a sample of tweets, being unable to identify any clear instance of controversy On the other side, for controversial debates we focused on political events such as elections, corruption cases or justice decisions.",
"To furtherly establish the presence of absence of controversy of our datasets, we visualized the corresponding networks through ForceAtlas2 BIBREF29. Figures FIGREF9 and FIGREF9 show an example of how non-controversial and controversial discussions look like respectively with ForceAtlas2 layout. As we can see in these figures, in a controversial discussion this layout tends to show two well separated groups while in a non-controversial one it tends to be only one big group. More information on the discussions is given in Table TABREF24.",
"To avoid potential overfitting, we use only twelve graphs as testbed during the development of the measures, half of them controversial (netanyahu, ukraine, @mauriciomacri 1-11 Jan, Kavanaugh 3 Oct, @mauriciomacri 11-18 Mar, Bolsonaro 27 Oct) and half non-controversial (sxsw, germanwings, onedirection, ultralive, nepal, mothersday). This procedure resembles a 40/60% train/test split in traditional machine learning applications.",
"Some of the discussions we consider refer to the same topics but in different periods of time. We needed to split them because our computing infrastructure does not allow us to compute such an enormous amount of data. However, being able to estimate controversy with only a subset of the discussion is an advantage, because discussions could take many days or months and we want to identify controversy as soon as possible, without the need of downloading the whole discussion. Moreover, for very long lasting discussions in social networks gathering the whole data would be impractical for any method.",
""
],
[
"Training a Fasttext model is not a deterministic process, as different runs could yield different results even using the same training set in each one. To analyze if these differences are significant, we decide to compute 20 scores for each discussion. The standard deviations among these 20 scores were low in all cases, with mean 0.01 and maximum 0.05. Consequently, we decided to report in this paper the average between the 20 scores, in practice taking the average between 5 runs would be enough. Figure FIGREF18 reports the scores computed by our measure in each topic for the two cluster methods. The beanplot shows the estimated probability density function for a measure computed on the topics, the individual observations are shown as small white lines in a one-dimensional scatter plot, and the median as a longer black line. The beanplot is divided into two groups, one for controversial topics (left/dark) and one for non-controversial ones (right/light). Hence, the black group shows the score distribution over controversial discussions and the white group over non-controversial ones. A larger separation of the two distributions indicates that the measure is better at capturing the characteristics of controversial topics, because a good separation allows to establish a threshold in the score that separates controversial and non-controversial discussions.",
"As we may see in the figure, the medians are well separated in both cases, with little overlapping. To better quantify this overlap we measure the sensitivity BIBREF42 of these predictions by measuring the area under the ROC curve (AUC ROC), obtaining a value of 0.98 for Walktrap clustering and 0.967 for Louvain (where 1 represents a perfect separation and 0.5 means that they are indistinguishable).",
"As Garimella et al. BIBREF23 have made their code public , we reproduced their best method Randomwalk on our datasets and measured the AUC ROC, obtaining a score of 0.935. An interesting finding was that their method had a poor performance over their own datasets. This was due to the fact (already explained in Section SECREF4) that it was not possible to retrieve the complete discussions, moreover, in no case could we restore more than 50% of the tweets. So we decided to remove these discussions and measure again the AUC ROC of this method, obtaining a 0.99 value. Our hypothesis is that the performance of that method was seriously hurt by the incompleteness of the data. We also tested our method on these datasets, obtaining a 0.99 AUC ROC with Walktrap and 0.989 with Louvain clustering.",
"We conclude that our method works better, as in practice both approaches show same performances -specially with Walktrap, but in presence of incomplete information our measure is more robust. The performance of Louvain is slightly worse but, as we mentioned in Section SECREF3, this method is much faster. Therefore, we decided to compare the running time of our method with both clustering techniques and also with the Randomwalk algorithm. In figure FIGREF18 we can see the distribution of running times of all techniques through box plots. Both versions of our method are faster than Randomwalk, while Louvain is faster than Walktrap.",
"We now analyze the impact of the length of the considered text in our method. Figure FIGREF18 depicts the results of similar experiment as Figure FIGREF18, but considering only 140 characters per tweet. As we may see, here the overlapping is bigger, having an AUC of 0.88. As for the impact on computing time, we observe that despite of the results of BIBREF34 that reported a complexity of O(h $log_{2}$(k)) at training and test tasks, in practice we observed a linear growth. We measured the running times of the training and predicting phases (the two text-related phases of our method), the resulting times are reported in figure FIGREF18, which shows running time as a function of the text-size. We include also the best estimated function that approximate computing time as a function of text-set size. As it may be seen, time grows almost linearly, ranging from 30 seconds for a set of 111 KB to 84 seconds for a set of 11941 KB. Finally, we measured running times for the whole method over each dataset with 280 characters. Times were between 170 and 2467 seconds with a mean of 842, making it in practice a reasonable amount of time."
],
[
"The task we address in this work is certainly not an easy one, and our study has some limitations, which we discuss in this section. Our work lead us to some conclusions regarding the overall possibility of measuring controversy through text, and what aspects need to be considered to deepen our work."
],
[
"As our approach to controversy is similar to that of Garimella et al. BIBREF23, we share some of their limitations with respect to several aspects: Evaluation -difficulties to establish ground-truth, Multisided controversies -controversy with more than two sides, Choice of data - manually pick topics, and Overfitting - small set of experiments. Although we have more discussions, it is still small set from statistical point of view. Apart from that, our language-based approach has other limitations which we mention in the following, together with their solutions or mitigation.",
"Data-size. Training an NLP model that can predict tags with a probability greater or equal than 0.9 requires significant amount of text, therefore our method works only for “big\" discussions. Most interesting controversies are those that have consequence at a society level, in general big enough for our method.",
"Multi-language discussions. When multiple languages are participating in a discussion it is common that users tend to retweet more tweets in their own language, creating sub-communities. In this cases our model will tend to predict higher controversy scores. This is the case for example of #germanwings, where users tweet in English, German and Spanish and it has the highest score in no-controversial topics. However, the polarization that we tackle in this work is normally part of a society cell (a nation, a city, etc.), and thus developed in just one language. We think that limiting the effectiveness of our analysis to single-language discussions is not a serious limitation.",
"Twitter only. Our findings are based on datasets coming from Twitter. While this is certainly a limitation, Twitter is one of the main venues for online public discussion, and one of the few for which data is available. Hence, Twitter is a natural choice. However, Twitter's characteristic limit of 280 characters per message (140 till short time ago) is an intrinsic limitation of that network. We think that in other social networks as Facebook or Reddit our method will work even better, as having more text per user could redound on a better NLP model as we verified comparing the results with 140 and 280 characters per post."
],
[
"In this article, we introduced the first large-scale systematic method for quantifying controversy in social media through content. We have shown that this method works on Spanish, English, French and Portuguese, it is context-agnostic and does not require the intervention of a domain expert.",
"We have compared its performance with state-of-the-art structure-based controversy measures showing that they have same performance and it is more robust. We also have shown that more text implies better performance and without significantly increasing computing time, therefore, it could be used in other contexts such as other social networks like Reddit or Facebook and we are going to test it in future works.",
"Training the model is not an expensive task since Fasttext has a good performance at this. However, the best performance for detecting principal communities is obtained by Walktrap. The complexity of that algorithm is O(m$n^2$)BIBREF38, where $m$ and $n$ are the number of edges and vertices respectively. This makes this method rather expensive to compute on big networks. Nevertheless, we have shown that with Louvain the method still obtains a very similar AUC ROC (0.99 with Walktrap and 0.989 with Louvain). With incomplete information its performance gets worse but it is still good (0.96) and better than previous state of the art.",
"This work opens several avenues for future research. One is identifying what words, semantics/concepts or language expressions make differ one community from the other. There are various ways to do this, for instance through the word-embbedings that Fasttext returns after training BIBREF34. Also we could use interpretability techniques on machine learning models BIBREF43. Finally, we could try other techniques for measuring controversy through text, using another NLP model as pre-trained neural network BERT BIBREF44 or, in a completely different approach measuring the dispersion index of the discussions word-embbedings BIBREF25. We are currently starting to follow this direction."
],
[
"F"
]
]
} | {
"question": [
"What are the state of the art measures?",
"What controversial topics are experimented with?",
"What datasets did they use?",
"What social media platform is observed?",
"How many languages do they experiment with?"
],
"question_id": [
"bf25a202ac713a34e09bf599b3601058d9cace46",
"abebf9c8c9cf70ae222ecb1d3cabf8115b9fc8ac",
"2df910c9806f0c379d7bb1bc2be2610438e487dc",
"a2a3af59f3f18a28eb2ca7055e1613948f395052",
"d92f1c15537b33b32bfc436e6d017ae7d9d6c29a"
],
"nlp_background": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"search_query": [
"social media",
"social media",
"social media",
"social media",
"social media"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Randomwalk",
"Walktrap",
"Louvain clustering"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"As Garimella et al. BIBREF23 have made their code public , we reproduced their best method Randomwalk on our datasets and measured the AUC ROC, obtaining a score of 0.935. An interesting finding was that their method had a poor performance over their own datasets. This was due to the fact (already explained in Section SECREF4) that it was not possible to retrieve the complete discussions, moreover, in no case could we restore more than 50% of the tweets. So we decided to remove these discussions and measure again the AUC ROC of this method, obtaining a 0.99 value. Our hypothesis is that the performance of that method was seriously hurt by the incompleteness of the data. We also tested our method on these datasets, obtaining a 0.99 AUC ROC with Walktrap and 0.989 with Louvain clustering."
],
"highlighted_evidence": [
"As Garimella et al. BIBREF23 have made their code public , we reproduced their best method Randomwalk on our datasets and measured the AUC ROC, obtaining a score of 0.935. An interesting finding was that their method had a poor performance over their own datasets. This was due to the fact (already explained in Section SECREF4) that it was not possible to retrieve the complete discussions, moreover, in no case could we restore more than 50% of the tweets. So we decided to remove these discussions and measure again the AUC ROC of this method, obtaining a 0.99 value. Our hypothesis is that the performance of that method was seriously hurt by the incompleteness of the data. We also tested our method on these datasets, obtaining a 0.99 AUC ROC with Walktrap and 0.989 with Louvain clustering."
]
}
],
"annotation_id": [
"021aa3828c5d37a1f59944de9dd3cc02fa75838e"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"political events such as elections, corruption cases or justice decisions"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To define the amount of data needed to run our method we established that the Fasttext model has to predict at least one user of each community with a probability greater or equal than 0.9 during ten different trainings. If that is not the case, we are not able to use DPC method. This decision made us consider only a subset of the datasets used in BIBREF23, because due to the time elapsed since their work, many tweets had been deleted and consequently the volume of the data was not enough for our framework. To enlarge our experiment base we added new debates, more detailed information about each one is shown in Table TABREF24 in UNKREF6. To select new discussions and to determine if they are controversial or not we looked for topics widely covered by mainstream media, and that have generated ample discussion, both online and offline. For non-controversy discussions we focused on “soft news\" and entertainment, but also to events that, while being impactful and/or dramatic, did not generate large controversies. To validate that intuition, we manually checked a sample of tweets, being unable to identify any clear instance of controversy On the other side, for controversial debates we focused on political events such as elections, corruption cases or justice decisions."
],
"highlighted_evidence": [
"To enlarge our experiment base we added new debates, more detailed information about each one is shown in Table TABREF24 in UNKREF6. To select new discussions and to determine if they are controversial or not we looked for topics widely covered by mainstream media, and that have generated ample discussion, both online and offline. For non-controversy discussions we focused on “soft news\" and entertainment, but also to events that, while being impactful and/or dramatic, did not generate large controversies. To validate that intuition, we manually checked a sample of tweets, being unable to identify any clear instance of controversy On the other side, for controversial debates we focused on political events such as elections, corruption cases or justice decisions."
]
}
],
"annotation_id": [
"e95447609cf18e68e21e14945f90f861e6151707"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"BIBREF32, BIBREF23, BIBREF33",
"discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The method we propose to measure the controversy equates in accuracy the one developed by Garimella et al.BIBREF23 and improves considerably computing time and robustness wrt the amount of data needed to effectively apply it. Our method is also based on a graph approach but it has its main focus on the vocabulary. We first train an NLP classifier that estimates opinion polarity of main users, then we run label-propagation BIBREF31 on the endorsement graph to get polarity of the whole network. Finally we compute the controversy score through a computation inspired in Dipole Moment, a measure used in physics to estimate electric polarity on a system. In our experiments we use the same data-sets from other works BIBREF32, BIBREF23, BIBREF33 as well as other datasets that we collected by us using a similar criterion (described in Section SECREF4).",
"In this section we detail the discussions we use to test our metric and how we determine the ground truth (i.e. if the discussion is controversial or not). We use thirty different discussions that took place between March 2015 and June 2019, half of them with controversy and half without it. We considered discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. We also studied these discussions taking first 140 characters and then 280 from each tweet to analyze the difference in performance and computing time wrt the length of the posts."
],
"highlighted_evidence": [
"In our experiments we use the same data-sets from other works BIBREF32, BIBREF23, BIBREF33 as well as other datasets that we collected by us using a similar criterion (described in Section SECREF4).",
"We use thirty different discussions that took place between March 2015 and June 2019, half of them with controversy and half without it. We considered discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. We also studied these discussions taking first 140 characters and then 280 from each tweet to analyze the difference in performance and computing time wrt the length of the posts."
]
}
],
"annotation_id": [
"5aa0165df4a2214113eaab31c703a11c25e22359"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Twitter"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Having this in mind and if we draw from the premise that when a discussion has a high controversy it is in general due to the presence of two principal communities fighting each other (or, conversely, that when there is no controversy there is just one principal community the members of which share a common point of view), we can measure the controversy by detecting if the discussion has one or two principal jargons in use. Our method is tested on Twitter datasets. This microblogging platform has been widely used to analyze discussions and polarization BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF2. It is a natural choice for these kind of problems, as it represents one of the main fora for public debate in online social media BIBREF15, it is a common destination for affiliative expressions BIBREF16 and is often used to report and read news about current events BIBREF17. An extra advantage of Twitter for this kind of studies is the availability of real-time data generated by millions of users. Other social media platforms offer similar data-sharing services, but few can match the amount of data and the accompanied documentation provided by Twitter. One last asset of Twitter for our work is given by retweets, whom typically indicate endorsement BIBREF18 and hence become a useful concept to model discussions as we can set “who is with who\". However, our method has a general approach and it could be used a priori in any social network. In this work we report excellent result tested on Twitter but in future work we are going to test it in other social networks."
],
"highlighted_evidence": [
"Our method is tested on Twitter datasets. This microblogging platform has been widely used to analyze discussions and polarization BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF2. It is a natural choice for these kind of problems, as it represents one of the main fora for public debate in online social media BIBREF15, it is a common destination for affiliative expressions BIBREF16 and is often used to report and read news about current events BIBREF17. An extra advantage of Twitter for this kind of studies is the availability of real-time data generated by millions of users. Other social media platforms offer similar data-sharing services, but few can match the amount of data and the accompanied documentation provided by Twitter. One last asset of Twitter for our work is given by retweets, whom typically indicate endorsement BIBREF18 and hence become a useful concept to model discussions as we can set “who is with who\". However, our method has a general approach and it could be used a priori in any social network. In this work we report excellent result tested on Twitter but in future work we are going to test it in other social networks."
]
}
],
"annotation_id": [
"7abef0ee7b3d611a9c4f1b04559a6907661d587b"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"four different languages: English, Portuguese, Spanish and French"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this section we detail the discussions we use to test our metric and how we determine the ground truth (i.e. if the discussion is controversial or not). We use thirty different discussions that took place between March 2015 and June 2019, half of them with controversy and half without it. We considered discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. We also studied these discussions taking first 140 characters and then 280 from each tweet to analyze the difference in performance and computing time wrt the length of the posts."
],
"highlighted_evidence": [
"In this section we detail the discussions we use to test our metric and how we determine the ground truth (i.e. if the discussion is controversial or not). We use thirty different discussions that took place between March 2015 and June 2019, half of them with controversy and half without it. We considered discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. We also studied these discussions taking first 140 characters and then 280 from each tweet to analyze the difference in performance and computing time wrt the length of the posts.\n\n"
]
}
],
"annotation_id": [
"6ed3e5103df141dbe78c789d25336f8c67afba50"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [
"Fig. 1",
"Fig. 2",
"Table 1: Datasets statistics, the top group represent controversial topics, while the bottom one represent non-controversial ones"
],
"file": [
"8-Figure1-1.png",
"11-Figure2-1.png",
"15-Table1-1.png"
]
} |
1710.01492 | Semantic Sentiment Analysis of Twitter Data | Internet and the proliferation of smart mobile devices have changed the way information is created, shared, and spreads, e.g., microblogs such as Twitter, weblogs such as LiveJournal, social networks such as Facebook, and instant messengers such as Skype and WhatsApp are now commonly used to share thoughts and opinions about anything in the surrounding world. This has resulted in the proliferation of social media content, thus creating new opportunities to study public opinion at a scale that was never possible before. Naturally, this abundance of data has quickly attracted business and research interest from various fields including marketing, political science, and social studies, among many others, which are interested in questions like these: Do people like the new Apple Watch? Do Americans support ObamaCare? How do Scottish feel about the Brexit? Answering these questions requires studying the sentiment of opinions people express in social media, which has given rise to the fast growth of the field of sentiment analysis in social media, with Twitter being especially popular for research due to its scale, representativeness, variety of topics discussed, as well as ease of public access to its messages. Here we present an overview of work on sentiment analysis on Twitter. | {
"section_name": [
"Synonyms",
"Glossary",
"Definition",
"Introduction",
"Key Points",
"Historical Background",
"Variants of the Task at SemEval",
"Features and Learning",
"Sentiment Polarity Lexicons",
"Key Applications",
"Future Directions",
"Cross-References",
"Recommended Reading"
],
"paragraphs": [
[
"Microblog sentiment analysis; Twitter opinion mining"
],
[
"Sentiment Analysis: This is text analysis aiming to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a piece of text."
],
[
"Sentiment analysis on Twitter is the use of natural language processing techniques to identify and categorize opinions expressed in a tweet, in order to determine the author's attitude toward a particular topic or in general. Typically, discrete labels such as positive, negative, neutral, and objective are used for this purpose, but it is also possible to use labels on an ordinal scale, or even continuous numerical values."
],
[
"Internet and the proliferation of smart mobile devices have changed the way information is created, shared, and spreads, e.g., microblogs such as Twitter, weblogs such as LiveJournal, social networks such as Facebook, and instant messengers such as Skype and WhatsApp are now commonly used to share thoughts and opinions about anything in the surrounding world. This has resulted in the proliferation of social media content, thus creating new opportunities to study public opinion at a scale that was never possible before.",
"Naturally, this abundance of data has quickly attracted business and research interest from various fields including marketing, political science, and social studies, among many others, which are interested in questions like these: Do people like the new Apple Watch? What do they hate about iPhone6? Do Americans support ObamaCare? What do Europeans think of Pope's visit to Palestine? How do we recognize the emergence of health problems such as depression? Do Germans like how Angela Merkel is handling the refugee crisis in Europe? What do republican voters in USA like/hate about Donald Trump? How do Scottish feel about the Brexit?",
"Answering these questions requires studying the sentiment of opinions people express in social media, which has given rise to the fast growth of the field of sentiment analysis in social media, with Twitter being especially popular for research due to its scale, representativeness, variety of topics discussed, as well as ease of public access to its messages BIBREF0 , BIBREF1 .",
"Despite all these opportunities, the rise of social media has also presented new challenges for natural language processing (NLP) applications, which had largely relied on NLP tools tuned for formal text genres such as newswire, and thus were not readily applicable to the informal language and style of social media. That language proved to be quite challenging with its use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, e.g., RT for re-tweet and #hashtags. In addition to the genre difference, there is also a difference in length: social media messages are generally short, often length-limited by design as in Twitter, i.e., a sentence or a headline rather than a full document. How to handle such challenges has only recently been the subject of thorough research BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 ."
],
[
"Sentiment analysis has a wide number of applications in areas such as market research, political and social sciences, and for studying public opinion in general, and Twitter is one of the most commonly-used platforms for this. This is due to its streaming nature, which allows for real-time analysis, to its social aspect, which encourages people to share opinions, and to the short size of the tweets, which simplifies linguistic analysis.",
"There are several formulations of the task of Sentiment Analysis on Twitter that look at different sizes of the target (e.g., at the level of words vs. phrases vs. tweets vs. sets of tweets), at different types of semantic targets (e.g., aspect vs. topic vs. overall tweet), at the explicitness of the target (e.g., sentiment vs. stance detection), at the scale of the expected label (2-point vs. 3-point vs. ordinal), etc. All these are explored at SemEval, the International Workshop on Semantic Evaluation, which has created a number of benchmark datasets and has enabled direct comparison between different systems and approaches, both as part of the competition and beyond.",
"Traditionally, the task has been addressed using supervised and semi-supervised methods, as well as using distant supervision, with the most important resource being sentiment polarity lexicons, and with feature-rich approaches as the dominant research direction for years. With the recent rise of deep learning, which in many cases eliminates the need for any explicit feature modeling, the importance of both lexicons and features diminishes, while at the same time attention is shifting towards learning from large unlabeled data, which is needed to train the high number of parameters of such complex models. Finally, as methods for sentiment analysis mature, more attention is also being paid to linguistic structure and to multi-linguality and cross-linguality."
],
[
"Sentiment analysis emerged as a popular research direction in the early 2000s. Initially, it was regarded as standard document classification into topics such as business, sport, and politics BIBREF10 . However, researchers soon realized that it was quite different from standard document classification BIBREF11 , and that it crucially needed external knowledge in the form of sentiment polarity lexicons.",
"Around the same time, other researchers realized the importance of external sentiment lexicons, e.g., Turney BIBREF12 proposed an unsupervised approach to learn the sentiment orientation of words/phrases: positive vs. negative. Later work studied the linguistic aspects of expressing opinions, evaluations, and speculations BIBREF13 , the role of context in determining the sentiment orientation BIBREF14 , of deeper linguistic processing such as negation handling BIBREF15 , of finer-grained sentiment distinctions BIBREF16 , of positional information BIBREF17 , etc. Moreover, it was recognized that in many cases, it is crucial to know not just the polarity of the sentiment but also the topic toward which this sentiment is expressed BIBREF18 .",
"Until the rise of social media, research on opinion mining and sentiment analysis had focused primarily on learning about the language of sentiment in general, meaning that it was either genre-agnostic BIBREF19 or focused on newswire texts BIBREF20 and customer reviews (e.g., from web forums), most notably about movies BIBREF10 and restaurants BIBREF21 but also about hotels, digital cameras, cell phones, MP3 and DVD players BIBREF22 , laptops BIBREF21 , etc. This has given rise to several resources, mostly word and phrase polarity lexicons, which have proven to be very valuable for their respective domains and types of texts, but less useful for short social media messages.",
"Later, with the emergence of social media, sentiment analysis in Twitter became a hot research topic. Unfortunately, research in that direction was hindered by the unavailability of suitable datasets and lexicons for system training, development, and testing. While some Twitter-specific resources were developed, initially they were either small and proprietary, such as the i-sieve corpus BIBREF6 , were created only for Spanish like the TASS corpus BIBREF23 , or relied on noisy labels obtained automatically, e.g., based on emoticons and hashtags BIBREF24 , BIBREF25 , BIBREF10 .",
"This situation changed with the shared task on Sentiment Analysis on Twitter, which was organized at SemEval, the International Workshop on Semantic Evaluation, a semantic evaluation forum previously known as SensEval. The task ran in 2013, 2014, 2015, and 2016, attracting over 40 participating teams in all four editions. While the focus was on general tweets, the task also featured out-of-domain testing on SMS messages, LiveJournal messages, as well as on sarcastic tweets.",
"SemEval-2013 Task 2 BIBREF26 and SemEval-2014 Task 9 BIBREF27 focused on expression-level and message-level polarity. SemEval-2015 Task 10 BIBREF28 , BIBREF29 featured topic-based message polarity classification on detecting trends toward a topic and on determining the out-of-context (a priori) strength of association of Twitter terms with positive sentiment. SemEval-2016 Task 4 BIBREF30 introduced a 5-point scale, which is used for human review ratings on popular websites such as Amazon, TripAdvisor, Yelp, etc.; from a research perspective, this meant moving from classification to ordinal regression. Moreover, it focused on quantification, i.e., determining what proportion of a set of tweets on a given topic are positive/negative about it. It also featured a 5-point scale ordinal quantification subtask BIBREF31 .",
"Other related tasks have explored aspect-based sentiment analysis BIBREF32 , BIBREF33 , BIBREF21 , sentiment analysis of figurative language on Twitter BIBREF34 , implicit event polarity BIBREF35 , stance in tweets BIBREF36 , out-of-context sentiment intensity of phrases BIBREF37 , and emotion detection BIBREF38 . Some of these tasks featured languages other than English."
],
[
"Tweet-level sentiment. The simplest and also the most popular task of sentiment analysis on Twitter is to determine the overall sentiment expressed by the author of a tweet BIBREF30 , BIBREF28 , BIBREF26 , BIBREF29 , BIBREF27 . Typically, this means choosing one of the following three classes to describe the sentiment: Positive, Negative, and Neutral. Here are some examples:",
"Positive: @nokia lumia620 cute and small and pocket-size, and available in the brigh test colours of day! #lumiacaption",
"Negative: I hate tweeting on my iPhone 5 it's so small :(",
"Neutral: If you work as a security in a samsung store...Does that make you guardian of the galaxy??",
"Sentiment polarity lexicons. Naturally, the overall sentiment in a tweet can be determined based on the sentiment-bearing words and phrases it contains as well as based on emoticons such as ;) and:(. For this purpose, researchers have been using lexicons of sentiment-bearing words. For example, cute is a positive word, while hate is a negative one, and the occurrence of these words in (1) and (2) can help determine the overall polarity of the respective tweet. We will discuss these lexicons in more detail below.",
"Prior sentiment polarity of multi-word phrases. Unfortunately, many sentiment-bearing words are not universally good or universally bad. For example, the polarity of an adjective could depend on the noun it modifies, e.g., hot coffee and unpredictable story express positive sentiment, while hot beer and unpredictable steering are negative. Thus, determining the out-of-context (a priori) strength of association of Twitter terms, especially multi-word terms, with positive/negative sentiment is an active research direction BIBREF28 , BIBREF29 .",
"Phrase-level polarity in context. Even when the target noun is the same, the polarity of the modifying adjective could be different in different tweets, e.g., small is positive in (1) but negative in (2), even though they both refer to a phone. Thus, there has been research in determining the sentiment polarity of a term in the context of a tweet BIBREF26 , BIBREF29 , BIBREF27 .",
"Sarcasm. Going back to tweet-level sentiment analysis, we should mention sarcastic tweets, which are particularly challenging as the sentiment they express is often the opposite of what the words they contain suggest BIBREF4 , BIBREF29 , BIBREF27 . For example, (4) and (5) express a negative sentiment even though they contain positive words and phrases such as thanks, love, and boosts my morale.",
"Negative: Thanks manager for putting me on the schedule for Sunday",
"Negative: I just love missing my train every single day. Really boosts my morale.",
"Sentiment toward a topic. Even though tweets are short, as they are limited to 140 characters by design (even though this was relaxed a bit as of September 19, 2016, and now media attachments such as images, videos, polls, etc., and quoted tweets no longer reduce the character count), they are still long enough to allow the tweet's author to mention several topics and to express potentially different sentiment toward each of them. A topic can be anything that people express opinions about, e.g., a product (e.g., iPhone6), a political candidate (e.g., Donald Trump), a policy (e.g., Obamacare), an event (e.g., Brexit), etc. For example, in (6) the author is positive about Donald Trump but negative about Hillary Clinton. A political analyzer would not be interested so much in the overall sentiment expressed in the tweet (even though one could argue that here it is positive overall), but rather in the sentiment with respect to a topic of his/her interest of study.",
"As a democrat I couldnt ethically support Hillary no matter who was running against her. Just so glad that its Trump, just love the guy!",
"(topic: Hillary INLINEFORM0 Negative)",
"(topic: Trump INLINEFORM0 Positive)",
"Aspect-based sentiment analysis. Looking again at (1) and (2), we can say that the sentiment is not about the phone (lumia620 and iPhone 5, respectively), but rather about some specific aspect thereof, namely, size. Similarly, in (7) instead of sentiment toward the topic lasagna, we can see sentiment toward two aspects thereof: quality (Positive sentiment) and quantity (Negative sentiment). Aspect-based sentiment analysis is an active research area BIBREF32 , BIBREF33 , BIBREF21 .",
"The lasagna is delicious but do not come here on an empty stomach.",
"Stance detection. A task related to, but arguably different in some respect from sentiment analysis, is that of stance detection. The goal here is to determine whether the author of a piece of text is in favor of, against, or neutral toward a proposition or a target BIBREF36 . For example, in (8) the author has a negative stance toward the proposition women have the right to abortion, even though the target is not mentioned at all. Similarly, in (9§) the author expresses a negative sentiment toward Mitt Romney, from which one can imply that s/he has a positive stance toward the target Barack Obama.",
"A foetus has rights too! Make your voice heard.",
"(Target: women have the right to abortion INLINEFORM0 Against)",
"All Mitt Romney cares about is making money for the rich.",
"(Target: Barack Obama INLINEFORM0 InFavor)",
"Ordinal regression. The above tasks were offered in different granularities, e.g., 2-way (Positive, Negative), 3-way (Positive, Neutral, Negative), 4-way (Positive, Neutral, Negative, Objective), 5-way (HighlyPositive, Positive, Neutral, Negative, HighlyNegative), and sometimes even 11-way BIBREF34 . It is important to note that the 5-way and the 11-way scales are ordinal, i.e., the classes can be associated with numbers, e.g., INLINEFORM0 2, INLINEFORM1 1, 0, 1, and 2 for the 5-point scale. This changes the machine learning task as not all mistakes are equal anymore BIBREF16 . For example, misclassifying a HighlyNegative example as HighlyPositive is a bigger mistake than misclassifying it as Negative or as Neutral. From a machine learning perspective, this means moving from classification to ordinal regression. This also requires different evaluation measures BIBREF30 .",
"Quantification. Practical applications are hardly ever interested in the sentiment expressed in a specific tweet. Rather, they look at estimating the prevalence of positive and negative tweets about a given topic in a set of tweets from some time interval. Most (if not all) tweet sentiment classification studies conducted within political science BIBREF39 , BIBREF40 , BIBREF41 , economics BIBREF42 , BIBREF7 , social science BIBREF43 , and market research BIBREF44 , BIBREF45 use Twitter with an interest in aggregate data and not in individual classifications. Thus, some tasks, such as SemEval-2016 Task 4 BIBREF30 , replace classification with class prevalence estimation, which is also known as quantification in data mining and related fields. Note that quantification is not a mere byproduct of classification, since a good classifier is not necessarily a good quantifier, and vice versa BIBREF46 . Finally, in case of multiple labels on an ordinal scale, we have yet another machine learning problem: ordinal quantification. Both versions of quantification require specific evaluation measures and machine learning algorithms."
],
[
"Pre-processing. Tweets are subject to standard preprocessing steps for text such as tokenization, stemming, lemmatization, stop-word removal, and part-of-speech tagging. Moreover, due to their noisy nature, they are also processed using some Twitter-specific techniques such as substitution/removal of URLs, of user mentions, of hashtags, and of emoticons, spelling correction, elongation normalization, abbreviation lookup, punctuation removal, detection of amplifiers and diminishers, negation scope detection, etc. For this, one typically uses Twitter-specific NLP tools such as part-of-speech and named entity taggers, syntactic parsers, etc. BIBREF47 , BIBREF48 , BIBREF49 .",
"Negation handling. Special handling is also done for negation. The most popular approach to negation handling is to transform any word that appeared in a negation context by adding a suffix _NEG to it, e.g., good would become good_NEG BIBREF50 , BIBREF10 . A negated context is typically defined as a text span between a negation word, e.g., no, not, shouldn't, and a punctuation mark or the end of the message. Alternatively, one could flip the polarity of sentiment words, e.g., the positive word good would become negative when negated. It has also been argued BIBREF51 that negation affects different words differently, and thus it was also proposed to build and use special sentiment polarity lexicons for words in negation contexts BIBREF52 .",
"Features. Traditionally, systems for Sentiment Analysis on Twitter have relied on handcrafted features derived from word-level (e.g., great, freshly roasted coffee, becoming president) and character-level INLINEFORM0 -grams (e.g., bec, beco, comin, oming), stems (e.g., becom), lemmata (e.g., become, roast), punctuation (e.g., exclamation and question marks), part-of-speech tags (e.g., adjectives, adverbs, verbs, nouns), word clusters (e.g., probably, probly, and maybe could be collapsed to the same word cluster), and Twitter-specific encodings such as emoticons (e.g., ;), :D), hashtags (#Brexit), user tags (e.g., @allenai_org), abbreviations (e.g., RT, BTW, F2F, OMG), elongated words (e.g., soooo, yaayyy), use of capitalization (e.g., proportion of ALL CAPS words), URLs, etc. Finally, the most important features are those based on the presence of words and phrases in sentiment polarity lexicons with positive/negative scores; examples of such features include number of positive terms, number of negative terms, ratio of the number of positive terms to the number of positive+negative terms, ratio of the number of negative terms to the number of positive+negative terms, sum of all positive scores, sum of all negative scores, sum of all scores, etc.",
"Supervised learning. Traditionally, the above features were fed into classifiers such as Maximum Entropy (MaxEnt) and Support Vector Machines (SVM) with various kernels. However, observation over the SemEval Twitter sentiment task in recent years shows growing interest in, and by now clear dominance of methods based on deep learning. In particular, the best-performing systems at SemEval-2015 and SemEval-2016 used deep convolutional networks BIBREF53 , BIBREF54 . Conversely, kernel machines seem to be less frequently used than in the past, and the use of learning methods other than the ones mentioned above is at this point scarce. All these models are examples of supervised learning as they need labeled training data.",
"Semi-supervised learning. We should note two things about the use of deep neural networks. First they can often do quite well without the need for explicit feature modeling, as they can learn the relevant features in their hidden layers starting from the raw text. Second, they have too many parameters, and thus they require a lot of training data, orders of magnitude more than it is realistic to have manually annotated. A popular way to solve this latter problem is to use self training, a form of semi-supervised learning, where first a system is trained on the available training data only, then this system is applied to make predictions on a large unannotated set of tweets, and finally it is trained for a few more iterations on its own predictions. This works because parts of the network, e.g., with convolution or with LSTMs BIBREF55 , BIBREF54 , BIBREF56 , need to learn something like a language model, i.e., which word is likely to follow which one. Training these parts needs no labels. While these parts can be also pre-trained, it is easier, and often better, to use self training.",
"Distantly-supervised learning. Another way to make use of large unannotated datasets is to rely on distant supervision BIBREF41 . For example, one can annotate tweets for sentiment polarity based on whether they contain a positive or a negative emoticon. This results in noisy labels, which can be used to train a system BIBREF54 , to induce sentiment-specific word embeddings BIBREF57 , sentiment-polarity lexicons BIBREF25 , etc.",
"Unsupervised learning. Fully unsupervised learning is not a popular method for addressing sentiment analysis tasks. Yet, some features used in sentiment analysis have been learned in an unsupervised way, e.g., Brown clusters to generalize over words BIBREF58 . Similarly, word embeddings are typically trained from raw tweets that have no annotation for sentiment (even though there is also work on sentiment-specific word embeddings BIBREF57 , which uses distant supervision)."
],
[
"Despite the wide variety of knowledge sources explored so far in the literature, sentiment polarity lexicons remain the most commonly used resource for the task of sentiment analysis.",
"Until recently, such sentiment polarity lexicons were manually crafted and were thus of small to moderate size, e.g., LIWC BIBREF59 has 2,300 words, the General Inquirer BIBREF60 contains 4,206 words, Bing Liu's lexicon BIBREF22 includes 6,786 words, and MPQA BIBREF14 has about 8,000 words.",
"Early efforts toward building sentiment polarity lexicons automatically yielded lexicons of moderate sizes such as the SentiWordNet BIBREF19 , BIBREF61 . However, recent results have shown that automatically extracted large-scale lexicons (e.g., up to a million words and phrases) offer important performance advantages, as confirmed at shared tasks on Sentiment Analysis on Twitter at SemEval 2013-2016 BIBREF30 , BIBREF26 , BIBREF29 , BIBREF27 . Using such large-scale lexicons was crucial for the performance of the top-ranked systems. Similar observations were made in the related Aspect-Based Sentiment Analysis task at SemEval 2014 BIBREF21 . In both tasks, the winning systems benefitted from building and using massive sentiment polarity lexicons BIBREF25 , BIBREF62 .",
"The two most popular large-scale lexicons were the Hashtag Sentiment Lexicon and the Sentiment140 lexicon, which were developed by the team of NRC Canada for their participation in the SemEval-2013 shared task on sentiment analysis on Twitter. Similar automatically induced lexicons proved useful for other SemEval tasks, e.g., for SemEval-2016 Task 3 on Community Question Answering BIBREF63 , BIBREF30 .",
"The importance of building sentiment polarity lexicons has resulted in a special subtask BIBREF29 at SemEval-2015 (part of Task 4) and an entire task BIBREF37 at SemEval-2016 (namely, Task 7), on predicting the out-of-context sentiment intensity of words and phrases. Yet, we should note though that the utility of using sentiment polarity lexicons for sentiment analysis probably needs to be revisited, as the best system at SemEval-2016 Task 4 could win without using any lexicons BIBREF53 ; it relied on semi-supervised learning using a deep neural network.",
"Various approaches have been proposed in the literature for bootstrapping sentiment polarity lexicons starting from a small set of seeds: positive and negative terms (words and phrases). The dominant approach is that of Turney BIBREF12 , who uses pointwise mutual information and bootstrapping to build a large lexicon and to estimate the semantic orientation of each word in that lexicon. He starts with a small set of seed positive (e.g., excellent) and negative words (e.g., bad), and then uses these words to induce sentiment polarity orientation for new words in a large unannotated set of texts (in his case, product reviews). The idea is that words that co-occur in the same text with positive seed words are likely to be positive, while those that tend to co-occur with negative words are likely to be negative. To quantify this intuition, Turney defines the notion of sentiment orientation (SO) for a term INLINEFORM0 as follows:",
" INLINEFORM0 ",
"where PMI is the pointwise mutual information, INLINEFORM0 and INLINEFORM1 are placeholders standing for any of the seed positive and negative terms, respectively, and INLINEFORM2 is a target word/phrase from the large unannotated set of texts (here tweets).",
"A positive/negative value for INLINEFORM0 indicates positive/negative polarity for the word INLINEFORM1 , and its magnitude shows the corresponding sentiment strength. In turn, INLINEFORM2 , where INLINEFORM3 is the probability to see INLINEFORM4 with any of the seed positive words in the same tweet, INLINEFORM5 is the probability to see INLINEFORM6 in any tweet, and INLINEFORM7 is the probability to see any of the seed positive words in a tweet; INLINEFORM8 is defined similarly.",
"The pointwise mutual information is a notion from information theory: given two random variables INLINEFORM0 and INLINEFORM1 , the mutual information of INLINEFORM2 and INLINEFORM3 is the “amount of information” (in units such as bits) obtained about the random variable INLINEFORM4 , through the random variable INLINEFORM5 BIBREF64 .",
"Let INLINEFORM0 and INLINEFORM1 be two values from the sample space of INLINEFORM2 and INLINEFORM3 , respectively. The pointwise mutual information between INLINEFORM4 and INLINEFORM5 is defined as follows: DISPLAYFORM0 ",
" INLINEFORM0 takes values between INLINEFORM1 , which happens when INLINEFORM2 = 0, and INLINEFORM3 if INLINEFORM4 .",
"In his experiments, Turney BIBREF12 used five positive and five negative words as seeds. His PMI-based approach further served as the basis for the creation of the two above-mentioned large-scale automatic lexicons for sentiment analysis in Twitter for English, initially developed by NRC for their participation in SemEval-2013 BIBREF25 . The Hashtag Sentiment Lexicon uses as seeds hashtags containing 32 positive and 36 negative words, e.g., #happy and #sad. Similarly, the Sentiment140 lexicon uses smileys as seed indicators for positive and negative sentiment, e.g., :), :-), and :)) as positive seeds, and :( and :-( as negative ones.",
"An alternative approach to lexicon induction has been proposed BIBREF65 , which, instead of using PMI, assigns positive/negative labels to the unlabeled tweets (based on the seeds), and then trains an SVM classifier on them, using word INLINEFORM0 -grams as features. These INLINEFORM1 -grams are then used as lexicon entries (words and phrases) with the learned classifier weights as polarity scores. Finally, it has been shown that sizable further performance gains can be obtained by starting with mid-sized seeds, i.e., hundreds of words and phrases BIBREF66 ."
],
[
"Sentiment analysis on Twitter has applications in a number of areas, including political science BIBREF39 , BIBREF40 , BIBREF41 , economics BIBREF42 , BIBREF7 , social science BIBREF43 , and market research BIBREF44 , BIBREF45 . It is used to study company reputation online BIBREF45 , to measure customer satisfaction, to identify detractors and promoters, to forecast market growth BIBREF42 , to predict the future income from newly-released movies, to forecast the outcome of upcoming elections BIBREF41 , BIBREF7 , to study political polarization BIBREF39 , BIBREF9 , etc."
],
[
"We expect the quest for more interesting formulations of the general sentiment analysis task to continue. We see competitions such as those at SemEval as the engine of this innovation, as they not only perform head-to-head comparisons, but also create databases and tools that enable follow-up research for many years afterward.",
"In terms of methods, we believe that deep learning BIBREF55 , BIBREF54 , BIBREF56 , together with semi-supervised and distantly-supervised methods BIBREF67 , BIBREF57 , will be the main focus of future research. We also expect more attention to be paid to linguistic structure and sentiment compositionality BIBREF68 , BIBREF69 . Moreover, we forecast more interest for languages other than English, and for cross-lingual methods BIBREF40 , BIBREF70 , BIBREF71 , which will allow leveraging on the rich resources that are already available for English. Last, but not least, the increase in opinion spam on Twitter will make it important to study astroturfing BIBREF72 and troll detection BIBREF73 , BIBREF74 , BIBREF75 ."
],
[
"Microblog Sentiment Analysis 100590",
"Multi-classifier System for Sentiment Analysis and Opinion Mining 351",
"Sentiment Analysis in Social Media 120",
"Sentiment Analysis of Microblogging Data 110168",
"Sentiment Analysis of Reviews 110169",
"Sentiment Analysis, Basics of 110159",
"Sentiment Quantification of User-Generated Content 110170",
"Social Media Analysis for Monitoring Political Sentiment 110172",
"Twitter Microblog Sentiment Analysis 265",
"User Sentiment and Opinion Analysis 192"
],
[
"For general research on sentiment analysis, we recommend the following surveys: BIBREF76 and BIBREF15 . For sentiment analysis on Twitter, we recommend the overview article on Sentiment Analysis on Twitter about the SemEval task BIBREF28 as well as the task description papers for different editions of the task BIBREF30 , BIBREF26 , BIBREF29 , BIBREF27 ."
]
]
} | {
"question": [
"What is the current SOTA for sentiment analysis on Twitter at the time of writing?",
"What difficulties does sentiment analysis on Twitter have, compared to sentiment analysis in other domains?",
"What are the metrics to evaluate sentiment analysis on Twitter?"
],
"question_id": [
"fa3663567c48c27703e09c42930e51bacfa54905",
"7997b9971f864a504014110a708f215c84815941",
"0d1408744651c3847469c4a005e4a9dccbd89cf1"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"irony",
"irony",
"irony"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"deep convolutional networks BIBREF53 , BIBREF54"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Supervised learning. Traditionally, the above features were fed into classifiers such as Maximum Entropy (MaxEnt) and Support Vector Machines (SVM) with various kernels. However, observation over the SemEval Twitter sentiment task in recent years shows growing interest in, and by now clear dominance of methods based on deep learning. In particular, the best-performing systems at SemEval-2015 and SemEval-2016 used deep convolutional networks BIBREF53 , BIBREF54 . Conversely, kernel machines seem to be less frequently used than in the past, and the use of learning methods other than the ones mentioned above is at this point scarce. All these models are examples of supervised learning as they need labeled training data."
],
"highlighted_evidence": [
" In particular, the best-performing systems at SemEval-2015 and SemEval-2016 used deep convolutional networks BIBREF53 , BIBREF54 "
]
}
],
"annotation_id": [
"eaa2871ebfa0e132a84ca316dee33a4e45c9aba9"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Tweets noisy nature, use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, short (length limited) text",
"evidence": [
"Pre-processing. Tweets are subject to standard preprocessing steps for text such as tokenization, stemming, lemmatization, stop-word removal, and part-of-speech tagging. Moreover, due to their noisy nature, they are also processed using some Twitter-specific techniques such as substitution/removal of URLs, of user mentions, of hashtags, and of emoticons, spelling correction, elongation normalization, abbreviation lookup, punctuation removal, detection of amplifiers and diminishers, negation scope detection, etc. For this, one typically uses Twitter-specific NLP tools such as part-of-speech and named entity taggers, syntactic parsers, etc. BIBREF47 , BIBREF48 , BIBREF49 .",
"Despite all these opportunities, the rise of social media has also presented new challenges for natural language processing (NLP) applications, which had largely relied on NLP tools tuned for formal text genres such as newswire, and thus were not readily applicable to the informal language and style of social media. That language proved to be quite challenging with its use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, e.g., RT for re-tweet and #hashtags. In addition to the genre difference, there is also a difference in length: social media messages are generally short, often length-limited by design as in Twitter, i.e., a sentence or a headline rather than a full document. How to handle such challenges has only recently been the subject of thorough research BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 ."
],
"highlighted_evidence": [
" Moreover, due to their noisy nature, they are also processed using some Twitter-specific techniques such as substitution/removal of URLs, of user mentions, of hashtags, and of emoticons, spelling correction, elongation normalization, abbreviation lookup, punctuation removal, detection of amplifiers and diminishers, negation scope detection, etc.",
"That language proved to be quite challenging with its use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, e.g., RT for re-tweet and #hashtags. In addition to the genre difference, there is also a difference in length: social media messages are generally short, often length-limited by design as in Twitter, i.e., a sentence or a headline rather than a full document"
]
}
],
"annotation_id": [
"71753531f52e1fc8ce0c1059d14979d0e723fff8"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"021b8796d9378d1be927a2a74d587f9f64b7082e"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [],
"file": []
} |
1912.01673 | COSTRA 1.0: A Dataset of Complex Sentence Transformations | COSTRA 1.0 is a dataset of Czech complex sentence transformations. The dataset is intended for the study of sentence-level embeddings beyond simple word alternations or standard paraphrasing. ::: The dataset consists of 4,262 unique sentences with an average length of 10 words, illustrating 15 types of modifications such as simplification, generalization, or formal and informal language variation. ::: The hope is that with this dataset, we should be able to test semantic properties of sentence embeddings and perhaps even to find some topologically interesting “skeleton” in the sentence embedding space. | {
"section_name": [
"Introduction",
"Background",
"Annotation",
"Annotation ::: First Round: Collecting Ideas",
"Annotation ::: Second Round: Collecting Data ::: Sentence Transformations",
"Annotation ::: Second Round: Collecting Data ::: Seed Data",
"Annotation ::: Second Round: Collecting Data ::: Spell-Checking",
"Dataset Description",
"Dataset Description ::: First Observations",
"Conclusion and Future Work"
],
"paragraphs": [
[
"Vector representations are becoming truly essential in majority of natural language processing tasks. Word embeddings became widely popular with the introduction of word2vec BIBREF0 and GloVe BIBREF1 and their properties have been analyzed in length from various aspects.",
"Studies of word embeddings range from word similarity BIBREF2, BIBREF3, over the ability to capture derivational relations BIBREF4, linear superposition of multiple senses BIBREF5, the ability to predict semantic hierarchies BIBREF6 or POS tags BIBREF7 up to data efficiency BIBREF8.",
"Several studies BIBREF9, BIBREF10, BIBREF11, BIBREF12 show that word vector representations are capable of capturing meaningful syntactic and semantic regularities. These include, for example, male/female relation demonstrated by the pairs “man:woman”, “king:queen” and the country/capital relation (“Russia:Moscow”, “Japan:Tokyo”). These regularities correspond to simple arithmetic operations in the vector space.",
"Sentence embeddings are becoming equally ubiquitous in NLP, with novel representations appearing almost every other week. With an overwhelming number of methods to compute sentence vector representations, the study of their general properties becomes difficult. Furthermore, it is not so clear in which way the embeddings should be evaluated.",
"In an attempt to bring together more traditional representations of sentence meanings and the emerging vector representations, bojar:etal:jnle:representations:2019 introduce a number of aspects or desirable properties of sentence embeddings. One of them is denoted as “relatability”, which highlights the correspondence between meaningful differences between sentences and geometrical relations between their respective embeddings in the highly dimensional continuous vector space. If such a correspondence could be found, we could use geometrical operations in the space to induce meaningful changes in sentences.",
"In this work, we present COSTRA, a new dataset of COmplex Sentence TRAnsformations. In its first version, the dataset is limited to sample sentences in Czech. The goal is to support studies of semantic and syntactic relations between sentences in the continuous space. Our dataset is the prerequisite for one of possible ways of exploring sentence meaning relatability: we envision that the continuous space of sentences induced by an ideal embedding method would exhibit topological similarity to the graph of sentence variations. For instance, one could argue that a subset of sentences could be organized along a linear scale reflecting the formalness of the language used. Another set of sentences could form a partially ordered set of gradually less and less concrete statements. And yet another set, intersecting both of the previous ones in multiple sentences could be partially or linearly ordered according to the strength of the speakers confidence in the claim.",
"Our long term goal is to search for an embedding method which exhibits this behaviour, i.e. that the topological map of the embedding space corresponds to meaningful operations or changes in the set of sentences of a language (or more languages at once). We prefer this behaviour to emerge, as it happened for word vector operations, but regardless if the behaviour is emergent or trained, we need a dataset of sentences illustrating these patterns. If large enough, such a dataset could serve for training. If it will be smaller, it will provide a test set. In either case, these sentences could provide a “skeleton” to the continuous space of sentence embeddings.",
"The paper is structured as follows: related summarizes existing methods of sentence embeddings evaluation and related work. annotation describes our methodology for constructing our dataset. data details the obtained dataset and some first observations. We conclude and provide the link to the dataset in conclusion"
],
[
"As hinted above, there are many methods of converting a sequence of words into a vector in a highly dimensional space. To name a few: BiLSTM with the max-pooling trained for natural language inference BIBREF13, masked language modeling and next sentence prediction using bidirectional Transformer BIBREF14, max-pooling last states of neural machine translation among many languages BIBREF15 or the encoder final state in attentionless neural machine translation BIBREF16.",
"The most common way of evaluating methods of sentence embeddings is extrinsic, using so called `transfer tasks', i.e. comparing embeddings via the performance in downstream tasks such as paraphrasing, entailment, sentence sentiment analysis, natural language inference and other assignments. However, even simple bag-of-words (BOW) approaches achieve often competitive results on such tasks BIBREF17.",
"Adi16 introduce intrinsic evaluation by measuring the ability of models to encode basic linguistic properties of a sentence such as its length, word order, and word occurrences. These so called `probing tasks' are further extended by a depth of the syntactic tree, top constituent or verb tense by DBLP:journals/corr/abs-1805-01070.",
"Both transfer and probing tasks are integrated in SentEval BIBREF18 framework for sentence vector representations. Later, Perone2018 applied SentEval to eleven different encoding methods revealing that there is no consistently well performing method across all tasks. SentEval was further criticized for pitfalls such as comparing different embedding sizes or correlation between tasks BIBREF19, BIBREF20.",
"shi-etal-2016-string show that NMT encoder is able to capture syntactic information about the source sentence. DBLP:journals/corr/BelinkovDDSG17 examine the ability of NMT to learn morphology through POS and morphological tagging.",
"Still, very little is known about semantic properties of sentence embeddings. Interestingly, cifka:bojar:meanings:2018 observe that the better self-attention embeddings serve in NMT, the worse they perform in most of SentEval tasks.",
"zhu-etal-2018-exploring generate automatically sentence variations such as:",
"Original sentence: A rooster pecked grain.",
"Synonym Substitution: A cock pecked grain.",
"Not-Negation: A rooster didn't peck grain.",
"Quantifier-Negation: There was no rooster pecking grain.",
"and compare their triplets by examining distances between their embeddings, i.e. distance between (1) and (2) should be smaller than distances between (1) and (3), (2) and (3), similarly, (3) and (4) should be closer together than (1)–(3) or (1)–(4).",
"In our previous study BIBREF21, we examined the effect of small sentence alternations in sentence vector spaces. We used sentence pairs automatically extracted from datasets for natural language inference BIBREF22, BIBREF23 and observed, that the simple vector difference, familiar from word embeddings, serves reasonably well also in sentence embedding spaces. The examined relations were however very simple: a change of gender, number, addition of an adjective, etc. The structure of the sentence and its wording remained almost identical.",
"We would like to move to more interesting non-trivial sentence comparison, beyond those in zhu-etal-2018-exploring or BaBo2019, such as change of style of a sentence, the introduction of a small modification that drastically changes the meaning of a sentence or reshuffling of words in a sentence that alters its meaning.",
"Unfortunately, such a dataset cannot be generated automatically and it is not available to our best knowledge. We try to start filling this gap with COSTRA 1.0."
],
[
"We acquired the data in two rounds of annotation. In the first one, we were looking for original and uncommon sentence change suggestions. In the second one, we collected sentence alternations using ideas from the first round. The first and second rounds of annotation could be broadly called as collecting ideas and collecting data, respectively."
],
[
"We manually selected 15 newspaper headlines. Eleven annotators were asked to modify each headline up to 20 times and describe the modification with a short name. They were given an example sentence and several of its possible alternations, see tab:firstroundexamples.",
"Unfortunately, these examples turned out to be highly influential on the annotators' decisions and they correspond to almost two thirds of all of modifications gathered in the first round. Other very common transformations include change of a word order or transformation into a interrogative/imperative sentence.",
"Other interesting modification were also proposed such as change into a fairy-tale style, excessive use of diminutives/vulgarisms or dadaism—a swap of roles in the sentence so that the resulting sentence is grammatically correct but nonsensical in our world. Of these suggestions, we selected only the dadaistic swap of roles for the current exploration (see nonsense in Table TABREF7).",
"In total, we collected 984 sentences with 269 described unique changes. We use them as an inspiration for second round of annotation."
],
[
"We selected 15 modifications types to collect COSTRA 1.0. They are presented in annotationinstructions.",
"We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.",
"Several modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be significantly changed by using only very small modification, or nonsense, in which words of the source sentence should be shuffled so that it is grammatically correct, but without any sense."
],
[
"The source sentences for annotations were selected from Czech data of Global Voices BIBREF24 and OpenSubtitles BIBREF25. We used two sources in order to have different styles of seed sentences, both journalistic and common spoken language. We considered only sentences with more than 5 and less than 15 words and we manually selected 150 of them for further annotation. This step was necessary to remove sentences that are:",
"too unreal, out of this world, such as:",
"Jedno fotonový torpédo a je z tebe vesmírná topinka.",
"“One photon torpedo and you're a space toast.”",
"photo captions (i.e. incomplete sentences), e.g.:",
"Zvláštní ekvádorský případ Correa vs. Crudo",
"“Specific Ecuadorian case Correa vs. Crudo”",
"too vague, overly dependent on the context:",
"Běž tam a mluv na ni.",
"“Go there and speak to her.”",
"Many of the intended sentence transformations would be impossible to apply to such sentences and annotators' time would be wasted. Even after such filtering, it was still quite possible that a desired sentence modification could not be achieved for a sentence. For such a case, we gave the annotators the option to enter the keyword IMPOSSIBLE instead of the particular (impossible) modification.",
"This option allowed to explicitly state that no such transformation is possible. At the same time most of the transformations are likely to lead to a large number possible outcomes. As documented in scratching2013, Czech sentence might have hundreds of thousand of paraphrases. To support some minimal exploration of this possible diversity, most of sentences were assigned to several annotators."
],
[
"The annotation is a challenging task and the annotators naturally make mistakes. Unfortunately, a single typo can significantly influence the resulting embedding BIBREF26. After collecting all the sentence variations, we applied the statistical spellchecker and grammar checker Korektor BIBREF27 in order to minimize influence of typos to performance of embedding methods. We manually inspected 519 errors identified by Korektor and fixed 129, which were identified correctly."
],
[
"In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics.",
"The time needed to carry out one piece of annotation (i.e. to provide one seed sentence with all 15 transformations) was on average almost 20 minutes but some annotators easily needed even half an hour. Out of the 4262 distinct sentences, only 188 was recorded more than once. In other words, the chance of two annotators producing the same output string is quite low. The most repeated transformations are by far past, future and ban. The least repeated is paraphrase with only single one repeated.",
"multiple-annots documents this in another way. The 293 annotations are split into groups depending on how many annotators saw the same input sentence: 30 annotations were annotated by one person only, 30 annotations by two different persons etc. The last column shows the number of unique outputs obtained in that group. Across all cases, 96.8% of produced strings were unique.",
"In line with instructions, the annotators were using the IMPOSSIBLE option scarcely (95 times, i.e. only 2%). It was also a case of 7 annotators only; the remaining 5 annotators were capable of producing all requested transformations. The top three transformations considered unfeasible were different meaning (using the same set of words), past (esp. for sentences already in the past tense) and simple sentence."
],
[
"We embedded COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019. Having browsed a number of 2D visualizations (PCA and t-SNE) of the space, we have to conclude that visually, LASER space does not seem to exhibit any of the desired topological properties discussed above, see fig:pca for one example.",
"The lack of semantic relations in the LASER space is also reflected in vector similarities, summarized in similarities. The minimal change operation substantially changed the meaning of the sentence, and yet the embedding of the transformation lies very closely to the original sentence (average similarity of 0.930). Tense changes and some form of negation or banning also keep the vectors very similar.",
"The lowest average similarity was observed for generalization (0.739) and simplification (0.781), which is not any bad sign. However the fact that paraphrases have much smaller similarity (0.826) than opposite meaning (0.902) documents that the vector space lacks in terms of “relatability”."
],
[
"We presented COSTRA 1.0, a small corpus of complex transformations of Czech sentences.",
"We plan to use this corpus to analyze a wide spectrum sentence embeddings methods to see to what extent the continuous space they induce reflects semantic relations between sentences in our corpus. The very first analysis using LASER embeddings indicates lack of “meaning relatability”, i.e. the ability to move along a trajectory in the space in order to reach desired sentence transformations. Actually, not even paraphrases are found in close neighbourhoods of embedded sentences. More “semantic” sentence embeddings methods are thus to be sought for.",
"The corpus is freely available at the following link:",
"http://hdl.handle.net/11234/1-3123",
"Aside from extending the corpus in Czech and adding other language variants, we are also considering to wrap COSTRA 1.0 into an API such as SentEval, so that it is very easy for researchers to evaluate their sentence embeddings in terms of “relatability”."
]
]
} | {
"question": [
"How many sentence transformations on average are available per unique sentence in dataset?",
"What annotations are available in the dataset?",
"How are possible sentence transformations represented in dataset, as new sentences?",
"What are all 15 types of modifications ilustrated in the dataset?",
"Is this dataset publicly available?",
"Are some baseline models trained on this dataset?",
"Do they do any analysis of of how the modifications changed the starting set of sentences?",
"How do they introduce language variation?",
"Do they use external resources to make modifications to sentences?"
],
"question_id": [
"a3d83c2a1b98060d609e7ff63e00112d36ce2607",
"aeda22ae760de7f5c0212dad048e4984cd613162",
"d5fa26a2b7506733f3fa0973e2fe3fc1bbd1a12d",
"2d536961c6e1aec9f8491e41e383dc0aac700e0a",
"18482658e0756d69e39a77f8fcb5912545a72b9b",
"9d336c4c725e390b6eba8bb8fe148997135ee981",
"016b59daa84269a93ce821070f4f5c1a71752a8a",
"771b373d09e6eb50a74fffbf72d059ad44e73ab0",
"efb52bda7366d2b96545cf927f38de27de3b5b77"
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "27.41 transformation on average of single seed sentence is available in dataset.",
"evidence": [
"In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics."
],
"highlighted_evidence": [
"After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset."
]
}
],
"annotation_id": [
"0259888535c15dba7d2d5de40c53adb8dee11971"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "For each source sentence, transformation sentences that are transformed according to some criteria (paraphrase, minimal change etc.)",
"evidence": [
"We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.",
"Several modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be significantly changed by using only very small modification, or nonsense, in which words of the source sentence should be shuffled so that it is grammatically correct, but without any sense."
],
"highlighted_evidence": [
"We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space.\n\nSeveral modification types were specifically selected to constitute a thorough test of embeddings. In different meaning, the annotators should create a sentence with some other meaning using the same words as the original sentence. Other transformations which should be difficult for embeddings include minimal change, in which the sentence meaning should be significantly changed by using only very small modification, or nonsense, in which words of the source sentence should be shuffled so that it is grammatically correct, but without any sense."
]
}
],
"annotation_id": [
"ccd5497747bba7fc7db7b20a4f6e4b3bdd72e410"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Yes, as new sentences.",
"evidence": [
"In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tab:statistics."
],
"highlighted_evidence": [
"In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including 150 seed sentences) that form the COSTRA 1.0 dataset."
]
}
],
"annotation_id": [
"2d057fce8922ab961ff70f7564f6b6d9a96c93e8"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "- paraphrase 1\n- paraphrase 2\n- different meaning\n- opposite meaning\n- nonsense\n- minimal change\n- generalization\n- gossip\n- formal sentence\n- non-standard sentence\n- simple sentence\n- possibility\n- ban\n- future\n- past",
"evidence": [
"We selected 15 modifications types to collect COSTRA 1.0. They are presented in annotationinstructions.",
"FLOAT SELECTED: Table 2: Sentences transformations requested in the second round of annotation with the instructions to the annotators. The annotators were given no examples (with the exception of nonsense) not to be influenced as much as in the first round."
],
"highlighted_evidence": [
"We selected 15 modifications types to collect COSTRA 1.0. They are presented in annotationinstructions.",
"FLOAT SELECTED: Table 2: Sentences transformations requested in the second round of annotation with the instructions to the annotators. The annotators were given no examples (with the exception of nonsense) not to be influenced as much as in the first round."
]
}
],
"annotation_id": [
"656c8738231070b03ee6902ad1d3370b9baf283c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The corpus is freely available at the following link:",
"http://hdl.handle.net/11234/1-3123"
],
"highlighted_evidence": [
"The corpus is freely available at the following link:\n\nhttp://hdl.handle.net/11234/1-3123"
]
}
],
"annotation_id": [
"684548df1af075ac0ccea74e6955d72d24f5f553"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"We embedded COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019. Having browsed a number of 2D visualizations (PCA and t-SNE) of the space, we have to conclude that visually, LASER space does not seem to exhibit any of the desired topological properties discussed above, see fig:pca for one example."
],
"highlighted_evidence": [
"We embedded COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019."
]
}
],
"annotation_id": [
"81c2cdb6b03a7dca137cea7d19912636c332c2b3"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The lack of semantic relations in the LASER space is also reflected in vector similarities, summarized in similarities. The minimal change operation substantially changed the meaning of the sentence, and yet the embedding of the transformation lies very closely to the original sentence (average similarity of 0.930). Tense changes and some form of negation or banning also keep the vectors very similar."
],
"highlighted_evidence": [
"The minimal change operation substantially changed the meaning of the sentence, and yet the embedding of the transformation lies very closely to the original sentence (average similarity of 0.930)."
]
}
],
"annotation_id": [
"ad359795e78244cb903c71c375f97649e496bea1"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" we were looking for original and uncommon sentence change suggestions"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We acquired the data in two rounds of annotation. In the first one, we were looking for original and uncommon sentence change suggestions. In the second one, we collected sentence alternations using ideas from the first round. The first and second rounds of annotation could be broadly called as collecting ideas and collecting data, respectively."
],
"highlighted_evidence": [
"We acquired the data in two rounds of annotation. In the first one, we were looking for original and uncommon sentence change suggestions."
]
}
],
"annotation_id": [
"384b2c6628c987547369f4c442bf19c759b7631c"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"e7b99e8d5fb7b4623f4c43da91e6ce3cbfa550ff"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Examples of transformations given to annotators for the source sentence Several hunters slept on a clearing. The third column shows how many of all the transformation suggestions collected in the first round closely mimic the particular example. The number is approximate as annotators typically call one transformation by several names, e.g. less formally, formality diminished, decrease of formality, not formal expressions, non-formal, less formal, formality decreased, ...",
"Table 2: Sentences transformations requested in the second round of annotation with the instructions to the annotators. The annotators were given no examples (with the exception of nonsense) not to be influenced as much as in the first round.",
"Table 3: Statistics for individual annotators (anonymized as armadillo, . . . , capybara).",
"Table 4: The number of people annotating the same sentence. Most of the sentences have at least three different annotators. Unfortunately, 24 sentences were left without a single annotation.",
"Table 5: Average cosine similarity between the seed sentence and its transformation.",
"Figure 1: 2D visualization using PCA of a single annotation. Best viewed in colors. Every color corresponds to one type of transformation, the large dot represents the source sentence."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"5-Table5-1.png",
"5-Figure1-1.png"
]
} |
1909.12231 | Learning to Create Sentence Semantic Relation Graphs for Multi-Document Summarization | Linking facts across documents is a challenging task, as the language used to express the same information in a sentence can vary significantly, which complicates the task of multi-document summarization. Consequently, existing approaches heavily rely on hand-crafted features, which are domain-dependent and hard to craft, or additional annotated data, which is costly to gather. To overcome these limitations, we present a novel method, which makes use of two types of sentence embeddings: universal embeddings, which are trained on a large unrelated corpus, and domain-specific embeddings, which are learned during training. ::: To this end, we develop SemSentSum, a fully data-driven model able to leverage both types of sentence embeddings by building a sentence semantic relation graph. SemSentSum achieves competitive results on two types of summary, consisting of 665 bytes and 100 words. Unlike other state-of-the-art models, neither hand-crafted features nor additional annotated data are necessary, and the method is easily adaptable for other tasks. To our knowledge, we are the first to use multiple sentence embeddings for the task of multi-document summarization. | {
"section_name": [
"Introduction",
"Method",
"Method ::: Sentence Semantic Relation Graph",
"Method ::: Sentence Encoder",
"Method ::: Graph Convolutional Network",
"Method ::: Saliency Estimation",
"Method ::: Training",
"Method ::: Summary Generation Process",
"Experiments ::: Datasets",
"Experiments ::: Evaluation Metric",
"Experiments ::: Model Settings",
"Experiments ::: Summarization Performance",
"Experiments ::: Sentence Semantic Relation Graph Construction",
"Experiments ::: Ablation Study",
"Experiments ::: Results and Discussion",
"Experiments ::: Results and Discussion ::: Summarization Performance",
"Experiments ::: Results and Discussion ::: Sentence Semantic Relation Graph",
"Experiments ::: Results and Discussion ::: Ablation Study",
"Related Work",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Today's increasing flood of information on the web creates a need for automated multi-document summarization systems that produce high quality summaries. However, producing summaries in a multi-document setting is difficult, as the language used to display the same information in a sentence can vary significantly, making it difficult for summarization models to capture. Given the complexity of the task and the lack of datasets, most researchers use extractive summarization, where the final summary is composed of existing sentences in the input documents. More specifically, extractive summarization systems output summaries in two steps : via sentence ranking, where an importance score is assigned to each sentence, and via the subsequent sentence selection, where the most appropriate sentence is chosen, by considering 1) their importance and 2) their frequency among all documents. Due to data sparcity, models heavily rely on well-designed features at the word level BIBREF0, BIBREF1, BIBREF2, BIBREF3 or take advantage of other large, manually annotated datasets and then apply transfer learning BIBREF4. Additionally, most of the time, all sentences in the same collection of documents are processed independently and therefore, their relationships are lost.",
"In realistic scenarios, features are hard to craft, gathering additional annotated data is costly, and the large variety in expressing the same fact cannot be handled by the use of word-based features only, as is often the case. In this paper, we address these obstacles by proposing to simultaneously leverage two types of sentence embeddings, namely embeddings pre-trained on a large corpus that capture a variety of meanings and domain-specific embeddings learned during training. The former is typically trained on an unrelated corpus composed of high quality texts, allowing to cover additional contexts for each encountered word and sentence. Hereby, we build on the assumption that sentence embeddings capture both the syntactic and semantic content of sentences. We hypothesize that using two types of sentence embeddings, general and domain-specific, is beneficial for the task of multi-document summarization, as the former captures the most common semantic structures from a large, general corpus, while the latter captures the aspects related to the domain.",
"We present SemSentSum (Figure FIGREF3), a fully data-driven summarization system, which does not depend on hand-crafted features, nor additional data, and is thus domain-independent. It first makes use of general sentence embedding knowledge to build a sentenc semantic relation graph that captures sentence similarities (Section SECREF4). In a second step, it trains genre-specific sentence embeddings related to the domains of the collection of documents, by utilizing a sentence encoder (Section SECREF5). Both representations are afterwards merged, by using a graph convolutional network BIBREF5 (Section SECREF6). Then, it employs a linear layer to project high-level hidden features for individual sentences to salience scores (Section SECREF8). Finally, it greedily produces relevant and non-redundant summaries by using sentence embeddings to detect similarities between candidate sentences and the current summary (Section SECREF11).",
"The main contributions of this work are as follows :",
"We aggregate two types of sentences embeddings using a graph representation. They share different properties and are consequently complementary. The first one is trained on a large unrelated corpus to model general semantics among sentences, whereas the second is domain-specific to the dataset and learned during training. Together, they enable a model to be domain-independent as it can be applied easily on other domains. Moreover, it could be used for other tasks including detecting information cascades, query-focused summarization, keyphrase extraction and information retrieval.",
"We devise a competitive multi-document summarization system, which does not need hand-crafted features nor additional annotated data. Moreover, the results are competitive for 665-byte and 100-word summaries. Usually, models are compared in one of the two settings but not both and thus lack comparability."
],
[
"Let $C$ denote a collection of related documents composed of a set of documents $\\lbrace D_i|i \\in [1,N]\\rbrace $ where $N$ is the number of documents. Moreover, each document $D_i$ consists of a set of sentences $\\lbrace S_{i,j}|j \\in [1,M]\\rbrace $, $M$ being the number of sentences in $D_i$. Given a collection of related documents $C$, our goal is to produce a summary $Sum$ using a subset of these in the input documents ordered in some way, such that $Sum = (S_{i_1,j_1},S_{i_2,j_2},...,S_{i_n,j_m})$.",
"In this section, we describe how SemSentSum estimates the salience score of each sentence and how it selects a subset of these to create the final summary. The architecture of SemSentSum is depicted in Figure FIGREF3.",
"In order to perform sentence selection, we first build our sentence semantic relation graph, where each vertex is a sentence and edges capture the semantic similarity among them. At the same time, each sentence is fed into a recurrent neural network, as a sentence encoder, to generate sentence embeddings using the last hidden states. A single-layer graph convolutional neural network is then applied on top, where the sentence semantic relation graph is the adjacency matrix and the sentence embeddings are the node features. Afterward, a linear layer is used to project high-level hidden features for individual sentences to salience scores, representing how salient a sentence is with respect to the final summary. Finally, based on this, we devise an innovative greedy method that leverages sentence embeddings to detect redundant sentences and select sentences until reaching the summary length limit."
],
[
"We model the semantic relationship among sentences using a graph representation. In this graph, each vertex is a sentence $S_{i,j}$ ($j$'th sentence of document $D_i$) from the collection documents $C$ and an undirected edge between $S_{i_u,j_u}$ and $S_{i_v,j_v}$ indicates their degree of similarity. In order to compute the semantic similarity, we use the model of BIBREF6 trained on the English Wikipedia corpus. In this manner, we incorporate general knowledge (i.e. not domain-specific) that will complete the specialized sentence embeddings obtained during training (see Section SECREF5). We process sentences by their model and compute the cosine similarity between every sentence pair, resulting in a complete graph. However, having a complete graph alone does not allow the model to leverage the semantic structure across sentences significantly, as every sentence pair is connected, and likewise, a sparse graph does not contain enough information to exploit semantic similarities. Furthermore, all edges have a weight above zero, since it is very unlikely that two sentence embeddings are completely orthogonal. To overcome this problem, we introduce an edge-removal-method, where every edge below a certain threshold $t_{sim}^g$ is removed in order to emphasize high sentence similarity. Nonetheless, $t_{sim}^g$ should not be too large, as we otherwise found the model to be prone to overfitting. After removing edges below $t_{sim}^g$, our sentence semantic relation graph is used as the adjacency matrix $A$. The impact of $t_{sim}^g$ with different values is shown in Section SECREF26.",
"Based on our aforementioned hypothesis that a combination of general and genre-specific sentence embeddings is beneficial for the task of multi-document summarization, we further incorporate general sentence embeddings, pre-trained on Wikipedia entries, into edges between sentences. Additionally, we compute specialised sentence embeddings, which are related to the domains of the documents (see Section SECREF35).",
"Note that 1) the pre-trained sentence embeddings are only used to compute the weights of the edges and are not used by the summarization model (as others are produced by the sentence encoder) and 2) the edge weights are static and do not change during training."
],
[
"Given a list of documents $C$, we encode each document's sentence $S_{i,j}$, where each has at most $L$ words $(w_{i,j,1}, w_{i,j,2}, ..., w_{i,j,L})$. In our experiments, all words are kept and converted into word embeddings, which are then fed to the sentence encoder in order to compute specialized sentence embeddings $S^{\\prime }_{i,j}$. We employ a single-layer forward recurrent neural network, using Long Short-Term Memory (LSTM) of BIBREF7 as sentence encoder, where the sentence embeddings are extracted from the last hidden states. We then concatenate all sentence embeddings into a matrix $X$ which constitutes the input node features that will be used by the graph convolutional network."
],
[
"After having computed all sentence embeddings and the sentence semantic relation graph, we apply a single-layer Graph Convolutional Network (GCN) from BIBREF5, in order to capture high-level hidden features for each sentence, encapsulating sentence information as well as the graph structure.",
"We believe that our sentence semantic relation graph contains information not present in the data (via universal embeddings) and thus, we leverage this information by running a graph convolution on the first order neighborhood.",
"The GCN model takes as input the node features matrix $X$ and a squared adjacency matrix $A$. The former contains all sentence embeddings of the collection of documents, while the latter is our underlying sentence semantic relation graph. It outputs hidden representations for each node that encode both local graph structure and nodes's features. In order to take into account the sentences themselves during the information propagation, we add self-connections (i.e. the identity matrix) to $A$ such that $\\tilde{A} = A + I$.",
"Subsequently, we obtain our sentence hidden features by using Equation DISPLAY_FORM7.",
"where $W_i$ is the weight matrix of the $i$'th graph convolution layer and $b_i$ the bias vector. We choose the Exponential Linear Unit (ELU) activation function from BIBREF8 due to its ability to handle the vanishing gradient problem, by pushing the mean unit activations close to zero and consequently facilitating the backpropagation. By using only one hidden layer, as we only have one input-to-hidden layer and one hidden-to-output layer, we limit the information propagation to the first order neighborhood."
],
[
"We use a simple linear layer to estimate a salience score for each sentence and then normalize the scores via softmax and obtain our estimated salience score $S^s_{i,j}$."
],
[
"Our model SemSentSum is trained in an end-to-end manner and minimizes the cross-entropy loss of Equation DISPLAY_FORM10 between the salience score prediction and the ROUGE-1 $F_1$ score for each sentence.",
"$F_1(S)$ is computed as the ROUGE-1 $F_1$ score, unlike the common practice in the area of single and multi-document summarization as recall favors longer sentences whereas $F_1$ prevents this tendency. The scores are normalized via softmax."
],
[
"While our model SemSentSum provides estimated saliency scores, we use a greedy strategy to construct an informative and non-redundant summary $Sum$. We first discard sentences having less than 9 words, as in BIBREF9, and then sort them in descending order of their estimated salience scores. We iteratively dequeue the sentence having the highest score and append it to the current summary $Sum$ if it is non-redundant with respect to the current content of $Sum$. We iterate until reaching the summary length limit.",
"To determine the similarity of a candidate sentence with the current summary, a sentence is considered as dissimilar if and only if the cosine similarity between its sentence embeddings and the embeddings of the current summary is below a certain threshold $t_{sim}^s$. We use the pre-trained model of BIBREF6 to compute sentence as well as summary embeddings, similarly to the sentence semantic relation graph construction. Our approach is novel, since it focuses on the semantic sentence structures and captures similarity between sentence meanings, instead of focusing on word similarities only, like previous TF-IDF approaches ( BIBREF0, BIBREF1, BIBREF3, BIBREF4)."
],
[
"We conduct experiments on the most commonly used datasets for multi-document summarization from the Document Understanding Conferences (DUC). We use DUC 2001, 2002, 2003 and 2004 as the tasks of generic multi-document summarization, because they have been carried out during these years. We use DUC 2001, 2002, 2003 and 2004 for generic multi-document summarization, where DUC 2001/2002 are used for training, DUC 2003 for validation and finally, DUC 2004 for testing, following the common practice."
],
[
"For the evaluation, we use ROUGE BIBREF10 with the official parameters of the DUC tasks and also truncate the summaries to 100 words for DUC 2001/2002/2003 and to 665 bytes for DUC 2004. Notably, we take ROUGE-1 and ROUGE-2 recall scores as the main metrics for comparison between produced summaries and golden ones as proposed by BIBREF11. The goal of the ROUGE-N metric is to compute the ratio of the number of N-grams from the generated summary matching these of the human reference summaries."
],
[
"To define the edge weights of our sentence semantic relation graph, we employ the 600-dimensional pre-trained unigram model of BIBREF6, using English Wikipedia as source corpus. We keep only edges having a weight larger than $t_{sim}^g = 0.5$ (tuned on the validation set). For word embeddings, the 300-dimensional pre-trained GloVe embeddings BIBREF12 are used and fixed during training. The output dimension of the sentence embeddings produced by the sentence encoder is the same as that of the word embeddings, i.e. 300. For the graph convolutional network, the number of hidden units is 128 and the size of the generated hidden feature vectors is also 300. We use a batch size of 1, a learning rate of $0.0075$ using Adam optimizer BIBREF13 with $\\beta _1=0.9, \\beta _2=0.999$ and $\\epsilon =10^{-8}$. In order to make SemSentSum generalize better, we use dropout BIBREF14 of $0.2$, batch normalization BIBREF15, clip the gradient norm at $1.0$ if higher, add L2-norm regularizer with a regularization factor of $10^{-12}$ and train using early stopping with a patience of 10 iterations. Finally, the similarity threshold $t_{sim}^s$ in the summary generation process is $0.8$ (tuned on the validation set)."
],
[
"We train our model SemSentSum on DUC 2001/2002, tune it on DUC 2003 and assess the performance on DUC 2004. In order to fairly compare SemSentSum with other models available in the literature, experiments are conducted with summaries truncated to 665 bytes (official summary length in the DUC competition), but also with summaries with a length constraint of 100 words. To the best of our knowledge, we are the first to conduct experiments on both summary lengths and compare our model with other systems producing either 100 words or 665 bytes summaries."
],
[
"We investigate different methods to build our sentence semantic relation graph and vary the value of $t_{sim}^g$ from $0.0$ to $0.75$ to study the impact of the threshold cut-off. Among these are :",
"Cosine : Using cosine similarity ;",
"Tf-idf : Considering a node as the query and another as document. The weight corresponds to the cosine similarity between the query and the document ;",
"TextRank BIBREF16 : A weighted graph is created where nodes are sentences and edges defined by a similarity measure based on word overlap. Afterward, an algorithm similar to PageRank BIBREF17 is used to compute sentence importance and refined edge weights ;",
"LexRank BIBREF9 : An unsupervised multi-document summarizer based on the concept of eigenvector centrality in a graph of sentences to set up the edge weights ;",
"Approximate Discourse Graph (ADG) BIBREF2 : Approximation of a discourse graph where nodes are sentences and edges $(S_u,S_v)$ indicates sentence $S_v$ can be placed after $S_u$ in a coherent summary ;",
"Personalized ADG (PADG) BIBREF3 : Normalized version of ADG where sentence nodes are normalized over all edges."
],
[
"In order to quantify the contribution of the different components of SemSentSum, we try variations of our model by removing different modules one at a time. Our two main elements are the sentence encoder (Sent) and the graph convolutional neural network (GCN). When we omit Sent, we substitute it with the pre-trained sentence embeddings used to build our sentence semantic relation graph."
],
[
"Three dimensions are used to evaluate our model SemSentSum : 1) the summarization performance, to assess its capability 2) the impact of the sentence semantic relation graph generation using various methods and different thresholds $t_{sim}^g$ 3) an ablation study to analyze the importance of each component of SemSentSum."
],
[
"We compare the results of SemSentSum for both settings : 665 bytes and 100 words summaries. We only include models using the same parameters to compute the ROUGE-1/ROUGE-2 score and recall as metrics.",
"The results for 665 bytes summaries are reported in Table TABREF28. We compare SemSentSum with three types of model relying on either 1) sentence or document embeddings 2) various hand-crafted features or 3) additional data.",
"For the first category, we significantly outperform MMR BIBREF18, PV-DBOW+BS BIBREF19 and PG-MMR BIBREF20. Although their methods are based on embeddings to represent the meaning, it shows that using only various distance metrics or encoder-decoder architecture on these is not efficient for the task of multi-document summarization (as also shown in the Ablation Study). We hypothesize that SemSentSum performs better by leveraging pre-trained sentence embeddings and hence lowering the effects of data scarcity.",
"Systems based on hand-crafted features include a widely-used learning-based summarization method, built on support vector regression SVR BIBREF21 ; a graph-based method based on approximating discourse graph G-Flow BIBREF2 ; Peer 65 which is the best peer systems participating in DUC evaluations ; and the recursive neural network R2N2 of BIBREF1 that learns automatically combinations of hand-crafted features. As can be seen, among these models completely dependent on hand-crafted features, SemSentSum achieves highest performance on both ROUGE scores. This denotes that using different linguistic and word-based features might not be enough to capture the semantic structures, in addition to being cumbersome to craft.",
"The last type of model is shown in TCSum BIBREF4 and uses transfer learning from a text classifier model, based on a domain-related dataset of $30\\,000$ documents from New York Times (sharing the same topics of the DUC datasets). In terms of ROUGE-1, SemSentSum significantly outperforms TCSum and performs similarly on ROUGE-2 score. This demonstrates that collecting more manually annotated data and training two models is unnecessary, in addition to being difficult to use in other domains, whereas SemSentSum is fully data driven, domain-independent and usable in realistic scenarios.",
"Table TABREF32 depicts models producing 100 words summaries, all depending on hand-crafted features. We use as baselines FreqSum BIBREF22 ; TsSum BIBREF23 ; traditional graph-based approaches such as Cont. LexRank BIBREF9 ; Centroid BIBREF24 ; CLASSY04 BIBREF25 ; its improved version CLASSY11 BIBREF26 and the greedy model GreedyKL BIBREF27. All of these models are significantly underperforming compared to SemSentSum. In addition, we include state-of-the-art models : RegSum BIBREF0 and GCN+PADG BIBREF3. We outperform both in terms of ROUGE-1. For ROUGE-2 scores we achieve better results than GCN+PADG but without any use of domain-specific hand-crafted features and a much smaller and simpler model. Finally, RegSum achieves a similar ROUGE-2 score but computes sentence saliences based on word scores, incorporating a rich set of word-level and domain-specific features. Nonetheless, our model is competitive and does not depend on hand-crafted features due to its full data-driven nature and thus, it is not limited to a single domain.",
"Consequently, the experiments show that achieving good performance for multi-document summarization without hand-crafted features or additional data is clearly feasible and SemSentSum produces competitive results without depending on these, is domain independent, fast to train and thus usable in real scenarios."
],
[
"Table TABREF34 shows the results of different methods to create the sentence semantic relation graph with various thresholds $t_{sim}^g$ for 665 bytes summaries (we obtain similar results for 100 words). A first observation is that using cosine similarity with sentence embeddings significantly outperforms all other methods for ROUGE-1 and ROUGE-2 scores, mainly because it relies on the semantic of sentences instead of their individual words. A second is that different methods evolve similarly : PADG, Textrank, Tf-idf behave similarly to an U-shaped curve for both ROUGE scores while Cosine is the only one having an inverted U-shaped curve. The reason for this behavior is a consequence of its distribution being similar to a normal distribution because it relies on the semantic instead of words, while the others are more skewed towards zero. This confirms our hypothesis that 1) having a complete graph does not allow the model to leverage much the semantic 2) a sparse graph might not contain enough information to exploit similarities. Finally, Lexrank and ADG have different trends between both ROUGE scores."
],
[
"We quantify the contribution of each module of SemSentSum in Table TABREF36 for 665 bytes summaries (we obtain similar results for 100 words). Removing the sentence encoder produces slightly lower results. This shows that the sentence semantic relation graph captures semantic attributes well, while the fine-tuned sentence embeddings obtained via the encoder help boost the performance, making these methods complementary. By disabling only the graph convolutional layer, a drastic drop in terms of performance is observed, which emphasizes that the relationship among sentences is indeed important and not present in the data itself. Therefore, our sentence semantic relation graph is able to capture sentence similarities by analyzing the semantic structures. Interestingly, if we remove the sentence encoder in addition to the graph convolutional layer, similar results are achieved. This confirms that alone, the sentence encoder is not able to compute an efficient representation of sentences for the task of multi-document summarization, probably due to the poor size of the DUC datasets. Finally, we can observe that the use of sentence embeddings only results in similar performance to the baselines, which rely on sentence or document embeddings BIBREF18, BIBREF19."
],
[
"The idea of using multiple embeddings has been employed at the word level. BIBREF28 use an attention mechanism to combine the embeddings for each word for the task of natural language inference. BIBREF29, BIBREF30 concatenate the embeddings of each word into a vector before feeding a neural network for the tasks of aspect extraction and sentiment analysis. To our knowledge, we are the first to combine multiple types of sentence embeddings.",
"Extractive multi-document summarization has been addressed by a large range of approaches. Several of them employ graph-based methods. BIBREF31 introduced a cross-document structure theory, as a basis for multi-document summarization. BIBREF9 proposed LexRank, an unsupervised multi-document summarizer based on the concept of eigenvector centrality in a graph of sentences. Other works exploit shallow or deep features from the graph's topology BIBREF32, BIBREF33. BIBREF34 pairs graph-based methods (e.g. random walk) with clustering. BIBREF35 improved results by using a reinforced random walk model to rank sentences and keep non-redundant ones. The system by BIBREF2 does sentence selection, while balancing coherence and salience and by building a graph that approximates discourse relations across sentences BIBREF36.",
"Besides graph-based methods, other viable approaches include Maximum Marginal Relevance BIBREF37, which uses a greedy approach to select sentences and considers the tradeoff between relevance and redundancy ; support vector regression BIBREF21 ; conditional random field BIBREF38 ; or hidden markov model BIBREF25. Yet other approaches rely on n-grams regression as in BIBREF39. More recently, BIBREF1 built a recursive neural network, which tries to automatically detect combination of hand-crafted features. BIBREF4 employ a neural model for text classification on a large manually annotated dataset and apply transfer learning for multi-document summarization afterward.",
"The work most closely related to ours is BIBREF3. They create a normalized version of the approximate discourse graph BIBREF2, based on hand-crafted features, where sentence nodes are normalized over all the incoming edges. They then employ a deep neural network, composed of a sentence encoder, three graph convolutional layers, one document encoder and an attention mechanism. Afterward, they greedily select sentences using TF-IDF similarity to detect redundant sentences. Our model differs in four ways : 1) we build our sentence semantic relation graph by using pre-trained sentence embeddings with cosine similarity, where neither heavy preprocessing, nor hand-crafted features are necessary. Thus, our model is fully data-driven and domain-independent unlike other systems. In addition, the sentence semantic relation graph could be used for other tasks than multi-document summarization, such as detecting information cascades, query-focused summarization, keyphrase extraction or information retrieval, as it is not composed of hand-crafted features. 2) SemSentSum is much smaller and consequently has fewer parameters as it only uses a sentence encoder and a single convolutional layer. 3) The loss function is based on ROUGE-1 $F_1$ score instead of recall to prevent the tendency of choosing longer sentences. 4) Our method for summary generation is also different and novel as we leverage sentence embeddings to compute the similarity between a candidate sentence and the current summary instead of TF-IDF based approaches."
],
[
"In this work, we propose a method to combine two types of sentence embeddings : 1) universal embeddings, pre-trained on a large corpus such as Wikipedia and incorporating general semantic structures across sentences and 2) domain-specific embeddings, learned during training. We merge them together by using a graph convolutional network that eliminates the need of hand-crafted features or additional annotated data.",
"We introduce a fully data-driven model SemSentSum that achieves competitive results for multi-document summarization on both kind of summary length (665 bytes and 100 words summaries), without requiring hand-crafted features or additional annotated data.",
"As SemSentSum is domain-independent, we believe that our sentence semantic relation graph and model can be used for other tasks including detecting information cascades, query-focused summarization, keyphrase extraction and information retrieval. In addition, we plan to leave the weights of the sentence semantic relation graph dynamic during training, and to integrate an attention mechanism directly into the graph."
],
[
"We thank Michaela Benk for proofreading and helpful advice."
]
]
} | {
"question": [
"How big is dataset domain-specific embedding are trained on?",
"How big is unrelated corpus universal embedding is traned on?",
"How better are state-of-the-art results than this model? "
],
"question_id": [
"1a7d28c25bb7e7202230e1b70a885a46dac8a384",
"6bc45d4f908672945192390642da5a2760971c40",
"48cc41c372d44b69a477998be449f8b81384786b"
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"025ca173b2d4e1a27de8d358c9d01dda2cab2f51"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"e068e693e87c45d33924bd45b2c68ad63ad56276"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"we achieve better results than GCN+PADG but without any use of domain-specific hand-crafted features",
" RegSum achieves a similar ROUGE-2 score"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table TABREF32 depicts models producing 100 words summaries, all depending on hand-crafted features. We use as baselines FreqSum BIBREF22 ; TsSum BIBREF23 ; traditional graph-based approaches such as Cont. LexRank BIBREF9 ; Centroid BIBREF24 ; CLASSY04 BIBREF25 ; its improved version CLASSY11 BIBREF26 and the greedy model GreedyKL BIBREF27. All of these models are significantly underperforming compared to SemSentSum. In addition, we include state-of-the-art models : RegSum BIBREF0 and GCN+PADG BIBREF3. We outperform both in terms of ROUGE-1. For ROUGE-2 scores we achieve better results than GCN+PADG but without any use of domain-specific hand-crafted features and a much smaller and simpler model. Finally, RegSum achieves a similar ROUGE-2 score but computes sentence saliences based on word scores, incorporating a rich set of word-level and domain-specific features. Nonetheless, our model is competitive and does not depend on hand-crafted features due to its full data-driven nature and thus, it is not limited to a single domain."
],
"highlighted_evidence": [
"In addition, we include state-of-the-art models : RegSum BIBREF0 and GCN+PADG BIBREF3. We outperform both in terms of ROUGE-1. For ROUGE-2 scores we achieve better results than GCN+PADG but without any use of domain-specific hand-crafted features and a much smaller and simpler model. Finally, RegSum achieves a similar ROUGE-2 score but computes sentence saliences based on word scores, incorporating a rich set of word-level and domain-specific features."
]
}
],
"annotation_id": [
"f9bb0a2837b849666eede350d6d13778dcc1995d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: Overview of SemSentSum. This illustration includes two documents in the collection, where the first one has three sentences and the second two. A sentence semantic relation graph is firstly built and each sentence node is processed by an encoder network at the same time. Thereafter, a single-layer graph convolutional network is applied on top and produces high-level hidden features for individual sentences. Then, salience scores are estimated using a linear layer and used to produce the final summary.",
"Table 1: Comparison of various models using ROUGE1/ROUGE-2 on DUC 2004 with 665 bytes summaries.",
"Table 2: Comparison of various models using ROUGE1/2 on DUC 2004 with 100 words summaries.",
"Table 3: ROUGE-1/2 for various methods to build the sentence semantic relation graph. A score significantly different (according to a Welch Two Sample t-test, p = 0.001) than cosine similarity (tgsim = 0.5) is denoted by ∗.",
"Table 4: Ablation test. Sent is the sentence encoder and GCN the graph convolutional network. According to a Welch Two Sample t-test (p = 0.001), a score significantly different than SemSentSum is denoted by ∗."
],
"file": [
"3-Figure1-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png"
]
} |
1706.08032 | A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking | This paper introduces a novel deep learning framework including a lexicon-based approach for sentence-level prediction of sentiment label distribution. We propose to first apply semantic rules and then use a Deep Convolutional Neural Network (DeepCNN) for character-level embeddings in order to increase information for word-level embedding. After that, a Bidirectional Long Short-Term Memory Network (Bi-LSTM) produces a sentence-wide feature representation from the word-level embedding. We evaluate our approach on three Twitter sentiment classification datasets. Experimental results show that our model can improve the classification accuracy of sentence-level sentiment analysis in Twitter social networking. | {
"section_name": [
"Introduction",
"Basic idea",
"Data Preparation",
"Preprocessing",
"Semantic Rules (SR)",
"Representation Levels",
"Deep Learning Module",
"Regularization",
" Experimental setups",
"Experimental results",
"Analysis",
"Conclusions"
],
"paragraphs": [
[
"Twitter sentiment classification have intensively researched in recent years BIBREF0 BIBREF1 . Different approaches were developed for Twitter sentiment classification by using machine learning such as Support Vector Machine (SVM) with rule-based features BIBREF2 and the combination of SVMs and Naive Bayes (NB) BIBREF3 . In addition, hybrid approaches combining lexicon-based and machine learning methods also achieved high performance described in BIBREF4 . However, a problem of traditional machine learning is how to define a feature extractor for a specific domain in order to extract important features.",
"Deep learning models are different from traditional machine learning methods in that a deep learning model does not depend on feature extractors because features are extracted during training progress. The use of deep learning methods becomes to achieve remarkable results for sentiment analysis BIBREF5 BIBREF6 BIBREF7 . Some researchers used Convolutional Neural Network (CNN) for sentiment classification. CNN models have been shown to be effective for NLP. For example, BIBREF6 proposed various kinds of CNN to learn sentiment-bearing sentence vectors, BIBREF5 adopted two CNNs in character-level to sentence-level representation for sentiment analysis. BIBREF7 constructs experiments on a character-level CNN for several large-scale datasets. In addition, Long Short-Term Memory (LSTM) is another state-of-the-art semantic composition model for sentiment classification with many variants described in BIBREF8 . The studies reveal that using a CNN is useful in extracting information and finding feature detectors from texts. In addition, a LSTM can be good in maintaining word order and the context of words. However, in some important aspects, the use of CNN or LSTM separately may not capture enough information.",
"Inspired by the models above, the goal of this research is using a Deep Convolutional Neural Network (DeepCNN) to exploit the information of characters of words in order to support word-level embedding. A Bi-LSTM produces a sentence-wide feature representation based on these embeddings. The Bi-LSTM is a version of BIBREF9 with Full Gradient described in BIBREF10 . In addition, the rules-based approach also effects classification accuracy by focusing on important sub-sentences expressing the main sentiment of a tweet while removing unnecessary parts of a tweet. The paper makes the following contributions:",
"The organization of the present paper is as follows: In section 2, we describe the model architecture which introduces the structure of the model. We explain the basic idea of model and the way of constructing the model. Section 3 show results and analysis and section 4 summarize this paper."
],
[
"Our proposed model consists of a deep learning classifier and a tweet processor. The deep learning classifier is a combination of DeepCNN and Bi-LSTM. The tweet processor standardizes tweets and then applies semantic rules on datasets. We construct a framework that treats the deep learning classifier and the tweet processor as two distinct components. We believe that standardizing data is an important step to achieve high accuracy. To formulate our problem in increasing the accuracy of the classifier, we illustrate our model in Figure. FIGREF4 as follows:",
"Tweets are firstly considered via a processor based on preprocessing steps BIBREF0 and the semantic rules-based method BIBREF11 in order to standardize tweets and capture only important information containing the main sentiment of a tweet.",
"We use DeepCNN with Wide convolution for character-level embeddings. A wide convolution can learn to recognize specific n-grams at every position in a word that allows features to be extracted independently of these positions in the word. These features maintain the order and relative positions of characters. A DeepCNN is constructed by two wide convolution layers and the need of multiple wide convolution layers is widely accepted that a model constructing by multiple processing layers have the ability to learn representations of data with higher levels of abstraction BIBREF12 . Therefore, we use DeepCNN for character-level embeddings to support morphological and shape information for a word. The DeepCNN produces INLINEFORM0 global fixed-sized feature vectors for INLINEFORM1 words.",
"A combination of the global fixed-size feature vectors and word-level embedding is fed into Bi-LSTM. The Bi-LSTM produces a sentence-level representation by maintaining the order of words.",
"Our work is philosophically similar to BIBREF5 . However, our model is distinguished with their approaches in two aspects:",
"Using DeepCNN with two wide convolution layers to increase representation with multiple levels of abstraction.",
"Integrating global character fixed-sized feature vectors with word-level embedding to extract a sentence-wide feature set via Bi-LSTM. This deals with three main problems: (i) Sentences have any different size; (ii) The semantic and the syntactic of words in a sentence are captured in order to increase information for a word; (iii) Important information of characters that can appear at any position in a word are extracted.",
"In sub-section B, we introduce various kinds of dataset. The modules of our model are constructed in other sub-sections."
],
[
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .",
"Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.",
"Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
],
[
"We firstly take unique properties of Twitter in order to reduce the feature space such as Username, Usage of links, None, URLs and Repeated Letters. We then process retweets, stop words, links, URLs, mentions, punctuation and accentuation. For emoticons, BIBREF0 revealed that the training process makes the use of emoticons as noisy labels and they stripped the emoticons out from their training dataset because BIBREF0 believed that if we consider the emoticons, there is a negative impact on the accuracies of classifiers. In addition, removing emoticons makes the classifiers learns from other features (e.g. unigrams and bi-grams) presented in tweets and the classifiers only use these non-emoticon features to predict the sentiment of tweets. However, there is a problem is that if the test set contains emoticons, they do not influence the classifiers because emoticon features do not contain in its training data. This is a limitation of BIBREF0 , because the emoticon features would be useful when classifying test data. Therefore, we keep emoticon features in the datasets because deep learning models can capture more information from emoticon features for increasing classification accuracy."
],
[
"In Twitter social networking, people express their opinions containing sub-sentences. These sub-sentences using specific PoS particles (Conjunction and Conjunctive adverbs), like \"but, while, however, despite, however\" have different polarities. However, the overall sentiment of tweets often focus on certain sub-sentences. For example:",
"@lonedog bwahahah...you are amazing! However, it was quite the letdown.",
"@kirstiealley my dentist is great but she's expensive...=(",
"In two tweets above, the overall sentiment is negative. However, the main sentiment is only in the sub-sentences following but and however. This inspires a processing step to remove unessential parts in a tweet. Rule-based approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 . BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 . We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps. We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method. The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules. Table TABREF16 illustrates the number of processed sentences on each dataset."
],
[
"To construct embedding inputs for our model, we use a fixed-sized word vocabulary INLINEFORM0 and a fixed-sized character vocabulary INLINEFORM1 . Given a word INLINEFORM2 is composed from characters INLINEFORM3 , the character-level embeddings are encoded by column vectors INLINEFORM4 in the embedding matrix INLINEFORM5 , where INLINEFORM6 is the size of the character vocabulary. For word-level embedding INLINEFORM7 , we use a pre-trained word-level embedding with dimension 200 or 300. A pre-trained word-level embedding can capture the syntactic and semantic information of words BIBREF17 . We build every word INLINEFORM8 into an embedding INLINEFORM9 which is constructed by two sub-vectors: the word-level embedding INLINEFORM10 and the character fixed-size feature vector INLINEFORM11 of INLINEFORM12 where INLINEFORM13 is the length of the filter of wide convolutions. We have INLINEFORM14 character fixed-size feature vectors corresponding to word-level embedding in a sentence."
],
[
"DeepCNN in the deep learning module is illustrated in Figure. FIGREF22 . The DeepCNN has two wide convolution layers. The first layer extract local features around each character windows of the given word and using a max pooling over character windows to produce a global fixed-sized feature vector for the word. The second layer retrieves important context characters and transforms the representation at previous level into a representation at higher abstract level. We have INLINEFORM0 global character fixed-sized feature vectors for INLINEFORM1 words.",
"In the next step of Figure. FIGREF4 , we construct the vector INLINEFORM0 by concatenating the word-level embedding with the global character fixed-size feature vectors. The input of Bi-LSTM is a sequence of embeddings INLINEFORM1 . The use of the global character fixed-size feature vectors increases the relationship of words in the word-level embedding. The purpose of this Bi-LSTM is to capture the context of words in a sentence and maintain the order of words toward to extract sentence-level representation. The top of the model is a softmax function to predict sentiment label. We describe in detail the kinds of CNN and LSTM that we use in next sub-part 1 and 2.",
"The one-dimensional convolution called time-delay neural net has a filter vector INLINEFORM0 and take the dot product of filter INLINEFORM1 with each m-grams in the sequence of characters INLINEFORM2 of a word in order to obtain a sequence INLINEFORM3 : DISPLAYFORM0 ",
"Based on Equation 1, we have two types of convolutions that depend on the range of the index INLINEFORM0 . The narrow type of convolution requires that INLINEFORM1 and produce a sequence INLINEFORM2 . The wide type of convolution does not require on INLINEFORM3 or INLINEFORM4 and produce a sequence INLINEFORM5 . Out-of-range input values INLINEFORM6 where INLINEFORM7 or INLINEFORM8 are taken to be zero. We use wide convolution for our model.",
"Given a word INLINEFORM0 composed of INLINEFORM1 characters INLINEFORM2 , we take a character embedding INLINEFORM3 for each character INLINEFORM4 and construct a character matrix INLINEFORM5 as following Equation. 2: DISPLAYFORM0 ",
"The values of the embeddings INLINEFORM0 are parameters that are optimized during training. The trained weights in the filter INLINEFORM1 correspond to a feature detector which learns to recognize a specific class of n-grams. The n-grams have size INLINEFORM2 . The use of a wide convolution has some advantages more than a narrow convolution because a wide convolution ensures that all weights of filter reach the whole characters of a word at the margins. The resulting matrix has dimension INLINEFORM3 .",
"Long Short-Term Memory networks usually called LSTMs are a improved version of RNN. The core idea behind LSTMs is the cell state which can maintain its state over time, and non-linear gating units which regulate the information flow into and out of the cell. The LSTM architecture that we used in our proposed model is described in BIBREF9 . A single LSTM memory cell is implemented by the following composite function: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 is the logistic sigmoid function, INLINEFORM1 and INLINEFORM2 are the input gate, forget gate, output gate, cell and cell input activation vectors respectively. All of them have a same size as the hidden vector INLINEFORM3 . INLINEFORM4 is the hidden-input gate matrix, INLINEFORM5 is the input-output gate matrix. The bias terms which are added to INLINEFORM6 and INLINEFORM7 have been omitted for clarity. In addition, we also use the full gradient for calculating with full backpropagation through time (BPTT) described in BIBREF10 . A LSTM gradients using finite differences could be checked and making practical implementations more reliable."
],
[
"For regularization, we use a constraint on INLINEFORM0 of the weight vectors BIBREF18 ."
],
[
"For the Stanford Twitter Sentiment Corpus, we use the number of samples as BIBREF5 . The training data is selected 80K tweets for a training data and 16K tweets for the development set randomly from the training data of BIBREF0 . We conduct a binary prediction for STS Corpus.",
"For Sander dataset, we use standard 10-fold cross validation as BIBREF14 . We construct the development set by selecting 10% randomly from 9-fold training data.",
"In Health Care Reform Corpus, we also select 10% randomly for the development set in a training set and construct as BIBREF14 for comparison. We describe the summary of datasets in Table III.",
"for all datasets, the filter window size ( INLINEFORM0 ) is 7 with 6 feature maps each for the first wide convolution layer, the second wide convolution layer has a filter window size of 5 with 14 feature maps each. Dropout rate ( INLINEFORM1 ) is 0.5, INLINEFORM2 constraint, learning rate is 0.1 and momentum of 0.9. Mini-batch size for STS Corpus is 100 and others are 4. In addition, training is done through stochastic gradient descent over shuffled mini-batches with Adadelta update rule BIBREF19 .",
"we use the publicly available Word2Vec trained from 100 billion words from Google and TwitterGlove of Stanford is performed on aggregated global word-word co-occurrence statistics from a corpus. Word2Vec has dimensionality of 300 and Twitter Glove have dimensionality of 200. Words that do not present in the set of pre-train words are initialized randomly."
],
[
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.",
"For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset."
],
[
"As can be seen, the models with SR outperforms the model with no SR. Semantic rules is effective in order to increase classification accuracy. We evaluate the efficiency of SR for the model in Table V of our full paper . We also conduct two experiments on two separate models: DeepCNN and Bi-LSTM in order to show the effectiveness of combination of DeepCNN and Bi-LSTM. In addition, the model using TwitterGlove outperform the model using GoogleW2V because TwitterGlove captures more information in Twitter than GoogleW2V. These results show that the character-level information and SR have a great impact on Twitter Data. The pre-train word vectors are good, universal feature extractors. The difference between our model and other approaches is the ability of our model to capture important features by using SR and combine these features at high benefit. The use of DeepCNN can learn a representation of words in higher abstract level. The combination of global character fixed-sized feature vectors and a word embedding helps the model to find important detectors for particles such as 'not' that negate sentiment and potentiate sentiment such as 'too', 'so' standing beside expected features. The model not only learns to recognize single n-grams, but also patterns in n-grams lead to form a structure significance of a sentence."
],
[
"In the present work, we have pointed out that the use of character embeddings through a DeepCNN to enhance information for word embeddings built on top of Word2Vec or TwitterGlove improves classification accuracy in Tweet sentiment classification. Our results add to the well-establish evidence that character vectors are an important ingredient for word-level in deep learning for NLP. In addition, semantic rules contribute handling non-essential sub-tweets in order to improve classification accuracy."
]
]
} | {
"question": [
"What were their results on the three datasets?",
"What was the baseline?",
"Which datasets did they use?",
"Are results reported only on English datasets?",
"Which three Twitter sentiment classification datasets are used for experiments?",
"What semantic rules are proposed?"
],
"question_id": [
"efb3a87845460655c53bd7365bcb8393c99358ec",
"0619fc797730a3e59ac146a5a4575c81517cc618",
"846a1992d66d955fa1747bca9a139141c19908e8",
"1ef8d1cb1199e1504b6b0daea52f2e4bd2ef7023",
"12d77ac09c659d2e04b5e3955a283101c3ad1058",
"d60a3887a0d434abc0861637bbcd9ad0c596caf4"
],
"nlp_background": [
"",
"",
"",
"five",
"five",
"five"
],
"topic_background": [
"",
"",
"",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"",
"",
"",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"twitter",
"twitter",
"twitter"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "accuracy of 86.63 on STS, 85.14 on Sanders and 80.9 on HCR",
"evidence": [
"FLOAT SELECTED: Table IV ACCURACY OF DIFFERENT MODELS FOR BINARY CLASSIFICATION"
],
"highlighted_evidence": [
"FLOAT SELECTED: Table IV ACCURACY OF DIFFERENT MODELS FOR BINARY CLASSIFICATION"
]
}
],
"annotation_id": [
"0282506d82926af9792f42326478042758bdc913"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN.",
"we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.",
"For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset."
],
"highlighted_evidence": [
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.\n\nFor Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset.\n\n"
]
}
],
"annotation_id": [
"740a7d8f2b75e1985ebefff16360d9b704eec6b3"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Stanford - Twitter Sentiment Corpus (STS Corpus)",
"Sanders - Twitter Sentiment Corpus",
"Health Care Reform (HCR)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .",
"Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.",
"Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
],
"highlighted_evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .\n\nSanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.\n\nHealth Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
]
}
],
"annotation_id": [
"ecc705477bc9fc15949d2a0ca55fd5f2e129acfb"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .",
"Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.",
"Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 .",
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, SVM on STS Corpus having good performance in previous time. The model of BIBREF5 is a state-of-the-art so far by using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.",
"For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH) and lexicons. The model of BIBREF14 is a state-of-the-art on Sanders and HCR datasets. Our models outperform the model of BIBREF14 for the Sanders dataset and HCR dataset."
],
"highlighted_evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .\n\nSanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.\n\nHealth Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 .",
"Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus. ",
"For Sanders and HCR datasets, we compare results with the model of BIBREF14 that used a ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR). "
]
}
],
"annotation_id": [
"710ac11299a9dce0201ababcbffafc1dce9f905b"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Stanford - Twitter Sentiment Corpus (STS Corpus)",
"Sanders - Twitter Sentiment Corpus",
"Health Care Reform (HCR)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .",
"Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.",
"Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
],
"highlighted_evidence": [
"Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .\n\nSanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.\n\nHealth Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 ."
]
}
],
"annotation_id": [
"de891b9e0b026bcc3d3fb336aceffb8a7228dbbd"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "rules that compute polarity of words after POS tagging or parsing steps",
"evidence": [
"In Twitter social networking, people express their opinions containing sub-sentences. These sub-sentences using specific PoS particles (Conjunction and Conjunctive adverbs), like \"but, while, however, despite, however\" have different polarities. However, the overall sentiment of tweets often focus on certain sub-sentences. For example:",
"@lonedog bwahahah...you are amazing! However, it was quite the letdown.",
"@kirstiealley my dentist is great but she's expensive...=(",
"In two tweets above, the overall sentiment is negative. However, the main sentiment is only in the sub-sentences following but and however. This inspires a processing step to remove unessential parts in a tweet. Rule-based approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 . BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 . We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps. We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method. The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules. Table TABREF16 illustrates the number of processed sentences on each dataset.",
"FLOAT SELECTED: Table I SEMANTIC RULES [12]"
],
"highlighted_evidence": [
"In Twitter social networking, people express their opinions containing sub-sentences. These sub-sentences using specific PoS particles (Conjunction and Conjunctive adverbs), like \"but, while, however, despite, however\" have different polarities. However, the overall sentiment of tweets often focus on certain sub-sentences. For example:\n\n@lonedog bwahahah...you are amazing! However, it was quite the letdown.\n\n@kirstiealley my dentist is great but she's expensive...=(\n\nIn two tweets above, the overall sentiment is negative. However, the main sentiment is only in the sub-sentences following but and however. This inspires a processing step to remove unessential parts in a tweet. Rule-based approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 . BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 . We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps. We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method. The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules. Table TABREF16 illustrates the number of processed sentences on each dataset.",
"FLOAT SELECTED: Table I SEMANTIC RULES [12]"
]
}
],
"annotation_id": [
"c59556729d9eaaff1c3e24854a7d78ff2255399d"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} | {
"caption": [
"Figure 1. The overview of a deep learning system.",
"Table II THE NUMBER OF TWEETS ARE PROCESSED BY USING SEMANTIC RULES",
"Table I SEMANTIC RULES [12]",
"Figure 2. Deep Convolutional Neural Network (DeepCNN) for the sequence of character embeddings of a word. For example with 1 region size is 2 and 4 feature maps in the first convolution and 1 region size is 3 with 3 feature maps in the second convolution.",
"Table IV ACCURACY OF DIFFERENT MODELS FOR BINARY CLASSIFICATION",
"Table III SUMMARY STATISTICS FOR THE DATASETS AFTER USING SEMANTIC RULES. c: THE NUMBER OF CLASSES. N : THE NUMBER OF TWEETS. lw : MAXIMUM SENTENCE LENGTH. lc : MAXIMUM CHARACTER LENGTH. |Vw|: WORD ALPHABET SIZE. |Vc|: CHARACTER ALPHABET SIZE."
],
"file": [
"3-Figure1-1.png",
"3-TableII-1.png",
"3-TableI-1.png",
"4-Figure2-1.png",
"5-TableIV-1.png",
"5-TableIII-1.png"
]
} |
1811.01399 | Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding | Knowledge graph embedding aims at modeling entities and relations with low-dimensional vectors. Most previous methods require that all entities should be seen during training, which is unpractical for real-world knowledge graphs with new entities emerging on a daily basis. Recent efforts on this issue suggest training a neighborhood aggregator in conjunction with the conventional entity and relation embeddings, which may help embed new entities inductively via their existing neighbors. However, their neighborhood aggregators neglect the unordered and unequal natures of an entity's neighbors. To this end, we summarize the desired properties that may lead to effective neighborhood aggregators. We also introduce a novel aggregator, namely, Logic Attention Network (LAN), which addresses the properties by aggregating neighbors with both rules- and network-based attention weights. By comparing with conventional aggregators on two knowledge graph completion tasks, we experimentally validate LAN's superiority in terms of the desired properties. | {
"section_name": [
"Introduction",
"Transductive Embedding Models",
"Inductive Embedding Models",
"Notations",
"Framework",
"Logic Attention Network",
"Incorporating Neighborhood Attention",
"Training Objective",
"Experimental Configurations",
"Data Construction",
"Experiments on Triplet Classification",
"Experimental Setup",
"Evaluation Results",
"Experiments on Link Prediction",
"Experimental Results",
"Case Studies on Neighbors' Weights",
"Conclusion",
"Acknowledgements"
],
"paragraphs": [
[
"Knowledge graphs (KGs) such as Freebase BIBREF0 , DBpedia BIBREF1 , and YAGO BIBREF2 play a critical role in various NLP tasks, including question answering BIBREF3 , information retrieval BIBREF4 , and personalized recommendation BIBREF5 . A typical KG consists of numerous facts about a predefined set of entities. Each fact is in the form of a triplet INLINEFORM0 (or INLINEFORM1 for short), where INLINEFORM2 and INLINEFORM3 are two entities and INLINEFORM4 is a relation the fact describes. Due to the discrete and incomplete natures of KGs, various KG embedding models are proposed to facilitate KG completion tasks, e.g., link prediction and triplet classification. After vectorizing entities and relations in a low-dimensional space, those models predict missing facts by manipulating the involved entity and relation embeddings.",
"Although proving successful in previous studies, traditional KG embedding models simply ignore the evolving nature of KGs. They require all entities to be present when training the embeddings. However, BIBREF6 shi2018open suggest that, on DBpedia, 200 new entities emerge on a daily basis between late 2015 and early 2016. Given the infeasibility of retraining embeddings from scratch whenever new entities come, missing facts about emerging entities are, unfortunately, not guaranteed to be inferred in time.",
"By transforming realistic networks, e.g., citation graphs, social networks, and protein interaction graphs, to simple graphs with single-typed and undirected edges, recent explorations BIBREF7 shed light on the evolution issue for homogeneous graphs. While learning embeddings for existing nodes, they inductively learn a neighborhood aggregator that represents a node by aggregating its neighbors' embeddings. The embeddings of unseen nodes can then be obtained by applying the aggregator on their existing neighbors.",
"It is well received that KGs differ from homogeneous graphs by their multi-relational structure BIBREF8 . Despite the difference, it seems promising to generalize the neighborhood aggregating scheme to embed emerging KG entities in an inductive manner. For example, in Figure FIGREF1 , a news article may describe an emerging entity (marked gray) as well as some facts involving existing entities. By generalizing structural information in the underlying KG, e.g., other entities residing in a similar neighborhood or involving similar relations, to the current entity's neighborhood, we can infer that it may probably live in Chicago.",
"Inspired by the above example, the inductive KG embedding problem boils down to designing a KG-specific neighborhood aggregator to capture essential neighborhood information. Intuitively, an ideal aggregator should have the following desired properties:",
"This paper concentrates on KG-specific neighborhood aggregators, which is of practical importance but only received limited focus BIBREF9 . To the best of our knowledge, neither conventional aggregators for homogeneous graphs nor those for KGs satisfy all the above three properties. In this regard, we employ the attention mechanism BIBREF10 and propose an aggregator called Logic Attention Network (LAN). Aggregating neighbors by a weighted combination of their transformed embeddings, LAN is inherently permutation invariant. To estimate the attention weights in LAN, we adopt two mechanisms to model relation- and neighbor-level information in a coarse-to-fine manner, At both levels, LAN is made aware of both neighborhood redundancy and query relation.",
"To summarize, our contributions are: (1) We propose three desired properties that decent neighborhood aggregators for KGs should possess. (2) We propose a novel aggregator, i.e., Logic Attention Network, to facilitate inductive KG embedding. (3) We conduct extensive comparisons with conventional aggregators on two KG completions tasks. The results validate the superiority of LAN w.r.t. the three properties."
],
[
"In recent years, representation learning problems on KGs have received much attention due to the wide applications of the resultant entity and relation embeddings. Typical KG embedding models include TransE BIBREF11 , Distmult BIBREF12 , Complex BIBREF13 , Analogy BIBREF14 , to name a few. For more explorations, we refer readers to an extensive survey BIBREF15 . However, conventional approaches on KG embedding work in a transductive manner. They require that all entities should be seen during training. Such limitation hinders them from efficiently generalizing to emerging entities."
],
[
"To relieve the issue of emerging entities, several inductive KG embedding models are proposed, including BIBREF16 xie2016representation, BIBREF6 shi2018open and BIBREF17 xie2016image which use description text or images as inputs. Although the resultant embeddings may be utilized for KG completion, it is not clear whether the embeddings are powerful enough to infer implicit or new facts beyond those expressed in the text/image. Moreover, when domain experts are recruited to introduce new entities via partial facts rather than text or images, those approaches may not help much.",
"In light of the above scenario, existing neighbors of an emerging entity are considered as another type of input for inductive models. In BIBREF9 ijcai2017-250, the authors propose applying Graph Neural Network (GNN) on the KG, which generates the embedding of a new entity by aggregating all its known neighbors. However, their model aggregates the neighbors via simple pooling functions, which neglects the difference among the neighbors. Other works like BIBREF18 fu2017hin2vec and BIBREF19 tang2015pte aim at embedding nodes for node classification given the entire graph and thus are inapplicable for inductive KG-specific tasks. BIBREF20 schlichtkrull2017modeling and BIBREF21 xiong2018one also rely on neighborhood structures to embed entities, but they either work transductively or focus on emerging relations.",
"Finally, we note another related line of studies on node representation learning for homogeneous graphs. Similar to text- or image-based inductive models for KGs, BIBREF22 duran2017learning, BIBREF23 yang2016revisiting, BIBREF24 velivckovic2017graph and BIBREF25 rossi2018deep exploit additional node attributes to embed unseen nodes. Another work more related to ours is BIBREF26 hamilton2017inductive. They tackle inductive node embedding by the neighborhood aggregation scheme. Their aggregators either trivially treat neighbors equally or unnecessarily require them to be ordered. Moreover, like all embedding models for homogeneous graphs, their model cannot be directly applied to KGs with multi-relational edges."
],
[
"Let INLINEFORM0 and INLINEFORM1 be two sets of entities and relations of size INLINEFORM2 and INLINEFORM3 , respectively. A knowledge graph is composed of a set of triplet facts, namely DISPLAYFORM0 ",
"For each INLINEFORM0 , we denote the reverse of INLINEFORM1 by INLINEFORM2 , and add an additional triplet INLINEFORM3 to INLINEFORM4 .",
"For an entity INLINEFORM0 , we denote by INLINEFORM1 its neighborhood in INLINEFORM2 , i.e., all related entities with the involved relations. Formally, DISPLAYFORM0 ",
"We denote the projection of INLINEFORM0 on INLINEFORM1 and INLINEFORM2 by INLINEFORM3 and INLINEFORM4 , respectively. Here INLINEFORM5 are neighbors and INLINEFORM6 are neighboring relations. When the context is clear, we simplify the INLINEFORM7 -th entity INLINEFORM8 by its subscript INLINEFORM9 . We denote vectors by bold lower letters, and matrices or sets of vectors by bold upper letters.",
"Given a knowledge graph INLINEFORM0 , we would like to learn a neighborhood aggregator INLINEFORM1 that acts as follows:",
"For an entity INLINEFORM0 on INLINEFORM1 , INLINEFORM2 depends on INLINEFORM3 's neighborhood INLINEFORM4 to embed INLINEFORM5 as a low-dimensional vector INLINEFORM6 ;",
"For an unknown triplet INLINEFORM0 , the embeddings of INLINEFORM1 and INLINEFORM2 output by INLINEFORM3 suggest the plausibility of the triplet.",
"When a new entity emerges with some triplets involving INLINEFORM0 and INLINEFORM1 , we could apply such an aggregator INLINEFORM2 on its newly established neighborhood, and use the output embedding to infer new facts about it."
],
[
"To obtain such a neighborhood aggregator INLINEFORM0 , we adopt an encoder-decoder framework as illustrated by Figure FIGREF12 . Given a training triplet, the encoder INLINEFORM1 encodes INLINEFORM2 and INLINEFORM3 into two embeddings with INLINEFORM4 . The decoder measures the plausibility of the triplet, and provides feedbacks to the encoder to adjust the parameters of INLINEFORM5 . In the remainder of this section, we describe general configurations of the two components.",
"As specified in Figure FIGREF12 , for an entity INLINEFORM0 on focus, the encoder works on a collection of input neighbor embeddings, and output INLINEFORM1 's embedding. To differentiate between input and output embeddings, we use superscripts INLINEFORM2 and INLINEFORM3 on the respective vectors. Let INLINEFORM4 , which is obtained from an embedding matrix INLINEFORM5 , be the embedding of a neighbor INLINEFORM6 , where INLINEFORM7 . To reflect the impact of relation INLINEFORM8 on INLINEFORM9 , we apply a relation-specific transforming function INLINEFORM10 on INLINEFORM11 as follows, DISPLAYFORM0 ",
"where INLINEFORM0 is the transforming vector for relation INLINEFORM1 and is restricted as a unit vector. We adopt this transformation from BIBREF27 wang2014knowledge since it does not involve matrix product operations and is of low computation complexity.",
"After neighbor embeddings are transformed, these transformed embeddings are fed to the aggregator INLINEFORM0 to output an embedding INLINEFORM1 for the target entity INLINEFORM2 , i.e., DISPLAYFORM0 ",
"By definition, an aggregator INLINEFORM0 essentially takes as input a collection of vectors INLINEFORM1 ( INLINEFORM2 ) and maps them to a single vector. With this observation, the following two types of functions seem to be natural choices for neighborhood aggregators, and have been adopted previously:",
"Pooling Functions. A typical pooling function is mean-pooling, which is defined by INLINEFORM0 . Besides mean-pooling, other previously adopted choices include sum- and max-pooling BIBREF9 . Due to their simple forms, pooling functions are permutation-invariant, but consider the neighbors equally. It is aware of neither potential redundancy in the neighborhood nor the query relations.",
"Recurrent Neural Networks (RNNs). In various natural language processing tasks, RNNs prove effective in modeling sequential dependencies. In BIBREF26 , the authors adopt an RNN variant LSTM BIBREF28 as neighborhood aggregator, i.e., INLINEFORM0 . To train and apply the LSTM-based aggregator, they have to randomly permute the neighbors, which violates the permutation variance property.",
"Given the subject and object embeddings INLINEFORM0 and INLINEFORM1 output by the encoder, the decoder is required to measure the plausibility of the training triplet. To avoid potential mixture with relations INLINEFORM2 in the neighborhood, we refer to the relation in the training triplet by query relation, and denote it by INLINEFORM3 instead. After looking up INLINEFORM4 's representation INLINEFORM5 from an embedding matrix INLINEFORM6 , the decoder scores the training triplet INLINEFORM7 with a scoring function INLINEFORM8 . Following BIBREF9 ijcai2017-250, we mainly investigate a scoring function based on TransE BIBREF11 defined by DISPLAYFORM0 ",
"where INLINEFORM0 denotes the L1 norm. To test whether the studied aggregators generalize among different scoring function, we will also consider several alternatives in experiments."
],
[
"As discussed above, traditional neighborhood aggregators do not preserve all desired properties. In this section, we describe a novel aggregator, namely Logic Attention Network (LAN), which addresses all three properties. We also provide details in training the LAN aggregator."
],
[
"Traditional neighborhood aggregators only depend on collections of transformed embeddings. They neglect other useful information in the neighborhood INLINEFORM0 and the query relation INLINEFORM1 , which may facilitate more effective aggregation of the transformed embeddings. To this end, we propose generalizing the aggregators from INLINEFORM2 to INLINEFORM3 .",
"Specifically, for an entity INLINEFORM0 , its neighbors INLINEFORM1 should contribute differently to INLINEFORM2 according to its importance in representing INLINEFORM3 . To consider the different contribution while preserving the permutation invariance property, we employ a weighted or attention-based aggregating approach on the transformed embeddings. The additional information in INLINEFORM4 and INLINEFORM5 is then exploited when estimating the attention weights. Formally, we obtain INLINEFORM6 by DISPLAYFORM0 ",
"Here INLINEFORM0 is the attention weight specified for each neighbor INLINEFORM1 given INLINEFORM2 and the query relation INLINEFORM3 .",
"To assign larger weights INLINEFORM0 to more important neighbors, from the perspective of INLINEFORM1 , we ask ourselves two questions at progressive levels: 1) What types of neighboring relations may lead us to potentially important neighbors? 2) Following those relations, which specific neighbor (in transformed embedding) may contain important information? Inspired by the two questions, we adopt the following two mechanisms to estimate INLINEFORM2 .",
"Relations in a KG are simply not independent of each other. For an entity INLINEFORM0 , one neighboring relation INLINEFORM1 may imply the existence of another neighboring relation INLINEFORM2 , though they may not necessarily connect INLINEFORM3 to the same neighbor. For example, a neighboring relation play_for may suggest the home city, i.e., live_in, of the current athlete entity. Following notations in logics, we denote potential dependency between INLINEFORM4 and INLINEFORM5 by a “logic rule” INLINEFORM6 . To measure the extent of such dependency, we define the confidence of a logic rule INLINEFORM7 as follows: DISPLAYFORM0 ",
"Here the function INLINEFORM0 equals 1 when INLINEFORM1 is true and 0 otherwise. As an empirical statistic over the entire KG, INLINEFORM2 is larger if more entities with neighboring relation INLINEFORM3 also have INLINEFORM4 as a neighboring relation.",
"With the confidence scores INLINEFORM0 between all relation pairs at hand, we are ready to characterize neighboring relations INLINEFORM1 that lead to important neighbors. On one hand, such a relation INLINEFORM2 should have a large INLINEFORM3 , i.e., it is statistically relevant to INLINEFORM4 . Following the above example, play_for should be consulted to if the query relation is live_in. On the other hand, INLINEFORM5 should not be implied by other relations in the neighborhood. For example, no matter whether the query relation is live_in or not, the neighboring relation work_as should not be assigned too much weight, because sufficient information is already provided by play_for.",
"Following the above intuitions, we implement the logic rule mechanism of measuring neighboring relations' usefulness as follow: DISPLAYFORM0 ",
"We note that INLINEFORM0 promotes relations INLINEFORM1 strongly implying INLINEFORM2 (the numerator) and demotes those implied by some other relation in the same neighborhood (the denominator). In this manner, our logic rule mechanism addresses both query relation awareness and neighborhood redundancy awareness.",
"With global statistics about relations, the logic rule mechanism guides the attention weight to be distributed at a coarse granularity of relations. However, it may be insufficient not to consult finer-grained information hidden in the transformed neighbor embeddings to determine which neighbor is important indeed. To take the transformed embeddings into consideration, we adopt an attention network BIBREF10 .",
"Specifically, given a query relation INLINEFORM0 , the importance of an entity INLINEFORM1 's neighbor INLINEFORM2 is measured by DISPLAYFORM0 ",
"Here the unnormalized attention weight INLINEFORM0 is given by an attention neural network as DISPLAYFORM0 ",
"In this equation, INLINEFORM0 and INLINEFORM1 are global attention parameters, while INLINEFORM2 is a relation-specific attention parameter for the query relation INLINEFORM3 . All those attention parameters are regarded as parameters of the encoder, and learned directly from the data.",
"Note that, unlike the logic rule mechanism at relation level, the computation of INLINEFORM0 concentrates more on the neighbor INLINEFORM1 itself. This is useful when the neighbor entity INLINEFORM2 is also helpful to explain the current training triplet. For example, in Figure FIGREF12 , the neighbor Chicago_Bulls could help to imply the object of live_in since there are other athletes playing for Chicago_Bulls while living in Chicago. Although working at the neighbor level, the dependency on transformed neighbor embeddings INLINEFORM3 and the relation-specific parameter INLINEFORM4 make INLINEFORM5 aware of both neighborhood redundancy and the query relation.",
"Finally, to incorporate these two weighting mechanisms together in measuring the importance of neighbors, we employ a double-view attention and reformulate Eq. ( EQREF22 ) as DISPLAYFORM0 "
],
[
"To train the entire model in Figure FIGREF12 , we need both positive triplets and negative ones. All triplets INLINEFORM0 from the knowledge graph naturally serve as positive triplets, which we denote by INLINEFORM1 . To make up for the absence of negative triplets, for each INLINEFORM2 , we randomly corrupt the object or subject (but not both) by another entity in INLINEFORM3 , and denote the corresponding negative triplets by INLINEFORM4 . Formally, DISPLAYFORM0 ",
"To encourage the decoder to give high scores for positive triplets and low scores for negative ones, we apply a margin-based ranking loss on each triplet INLINEFORM0 , i.e., DISPLAYFORM0 ",
"Here INLINEFORM0 denotes the positive part of x, and INLINEFORM1 is a hyper-parameter for the margin. Finally, the training objective is defined by DISPLAYFORM0 ",
"The above training objective only optimizes the output of the aggregator, i.e., the output entity embeddings INLINEFORM0 . The input entity embeddings INLINEFORM1 , however, are not directly aware of the structure of the entire KG. To make the input embeddings and thus the aggregation more meaningful, we set up a subtask for LAN.",
"First, we define a second scoring function, which is similar to Eq. ( EQREF20 ) except that input embeddings INLINEFORM0 from INLINEFORM1 are used to represent the subject and object, i.e., DISPLAYFORM0 ",
"The embedding of query relation INLINEFORM0 is obtained from the same embedding matrix INLINEFORM1 as in the first scoring function. Then a similar margin-based ranking loss INLINEFORM2 as Eq. ( EQREF32 ) is defined for the subtask. Finally, we combine the subtask with the main task, and reformulate the overall training objective of LAN as DISPLAYFORM0 "
],
[
"We evaluate the effectiveness of our LAN model on two typical knowledge graph completion tasks, i.e., link prediction and triplet classification. We compare our LAN with two baseline aggregators, MEAN and LSTM, as described in the Encoder section. MEAN is used on behalf of pooling functions since it leads to the best performance in BIBREF9 ijcai2017-250. LSTM is used due to its large expressive capability BIBREF26 ."
],
[
"In both tasks, we need datasets whose test sets contain new entities unseen during training. For the task of triplet classification, we directly use the datasets released by BIBREF9 ijcai2017-250 which are based on WordNet11 BIBREF29 . Since they do not conduct experiments on the link prediction task, we construct the required datasets based on FB15K BIBREF11 following a similar protocol used in BIBREF9 ijcai2017-250 as follows.",
"Sampling unseen entities. Firstly, we randomly sample INLINEFORM0 of the original testing triplets to form a new test set INLINEFORM1 for our inductive scenario ( BIBREF9 ijcai2017-250 samples INLINEFORM2 testing triplets). Then two different strategies are used to construct the candidate unseen entities INLINEFORM6 . One is called Subject, where only entities appearing as the subjects in INLINEFORM7 are added to INLINEFORM8 . Another is called Object, where only objects in INLINEFORM9 are added to INLINEFORM10 . For an entity INLINEFORM11 , if it does not have any neighbor in the original training set, such an entity is filtered out, yielding the final unseen entity set INLINEFORM12 . For a triplet INLINEFORM13 , if INLINEFORM14 or INLINEFORM15 , it is removed from INLINEFORM16 .",
"Filtering and splitting data sets. The second step is to ensure that unseen entities would not appear in final training set or validation set. We split the original training set into two data sets, the new training set and auxiliary set. For a triplet INLINEFORM0 in original training set, if INLINEFORM1 , it is added to the new training set. If INLINEFORM2 or INLINEFORM3 , it is added to the auxiliary set, which serves as existing neighbors for unseen entities in INLINEFORM4 .",
"Finally, for a triplet INLINEFORM0 in the original validation set, if INLINEFORM1 or INLINEFORM2 , it is removed from the validation set.",
"The statistics for the resulting INLINEFORM0 datasets using Subject and Object strategies are in Table TABREF34 ."
],
[
"Triplet classification aims at classifying a fact triplet INLINEFORM0 as true or false. In the dataset of BIBREF9 ijcai2017-250, triplets in the validation and testing sets are labeled as true or false, while triplets in the training set are all true ones.",
"To tackle this task, we preset a threshold INLINEFORM0 for each relation r. If INLINEFORM1 , the triplet is classified as positive, otherwise it is negative. We determine the optimal INLINEFORM2 by maximizing classification accuracy on the validation set."
],
[
"Since this task is also conducted in BIBREF9 ijcai2017-250, we use the same configurations with learning rate INLINEFORM0 , embedding dimension INLINEFORM1 , and margin INLINEFORM2 for all datasets. We randomly sample 64 neighbors for each entity. Zero padding is used when the number of neighbors is less than 64. L2-regularization is applied on the parameters of LAN. The regularization rate is INLINEFORM3 .",
"We search the best hyper-parameters of all models according to the performance on validation set. In detail, we search learning rate INLINEFORM0 in INLINEFORM1 , embedding dimension for neighbors INLINEFORM2 in INLINEFORM3 , and margin INLINEFORM4 in INLINEFORM5 . The optimal configurations are INLINEFORM6 for all the datasets."
],
[
"The results are reported in Table TABREF42 . Since we did not achieve the same results for MEAN as reported in BIBREF9 ijcai2017-250 with either our implementation or their released source code, the best results from their original paper are reported. From the table, we observe that, on one hand, LSTM results in poorer performance compared with MEAN, which involves fewer parameters though. This demonstrates the necessity of the permutation invariance for designing neighborhood aggregators for KGs. On the other hand, our LAN model consistently achieves the best results on all datasets, demonstrating the effectiveness of LAN on this KBC task."
],
[
"Link prediction in the inductive setting aims at reasoning the missing part “?” in a triplet when given INLINEFORM0 or INLINEFORM1 with emerging entities INLINEFORM2 or INLINEFORM3 respectively. To tackle the task, we firstly hide the object (subject) of each testing triplet in Subject-R (Object-R) to produce a missing part. Then we replace the missing part with all entities in the entity set INLINEFORM4 to construct candidate triplets. We compute the scoring function INLINEFORM5 defined in Eq. ( EQREF20 ) for all candidate triplets, and rank them in descending order. Finally, we evaluate whether the ground-truth entities are ranked ahead of other entities. We use traditional evaluation metrics as in the KG completion literature, i.e., Mean Rank (MR), Mean Reciprocal Rank (MRR), and the proportion of ground truth entities ranked top-k (Hits@k, INLINEFORM6 ). Since certain candidate triplets might also be true, we follow previous works and filter out these fake negatives before ranking."
],
[
"The results on Subject-10 and Object-10 are reported in Table TABREF43 . The results on other datasets are similar and we summarize them later in Figure FIGREF50 . From Table TABREF43 , we still observe consistent results for all the models as in the triplet classification task. Firstly, LSTM results in the poorest performance on all datasets. Secondly, our LAN model outperforms all the other baselines significantly, especially on the Hit@k metrics. The improvement on the MR metric of LAN might not be considerable. This is due to the flaw of the MR metric since it is more sensitive to lower positions of the ranking, which is actually of less importance. The MRR metric is proposed for this reason, where we could observe consistent improvements brought by LAN. The effectiveness of LAN on link prediction validates LAN's superiority to other aggregators and the necessities to treat the neighbors differently in a permutation invariant way. To analyze whether LAN outperforms the others for expected reasons and generalizes to other configurations, we conduct the following studies.",
"In this experiment, we would like to confirm that it's necessary for the aggregator to be aware of the query relation. Specifically, we investigate the attention neural network and design two degenerated baselines. One is referred to as Query-Attention and is simply an attention network as in LAN except that the logic rule mechanism is removed. The other is referred to as Global-Attention, which is also an attention network except that the query relation embedding INLINEFORM0 in Eq. ( EQREF28 ) is masked by a zero vector. The results are reported in Table TABREF46 . We observe that although superior to MEAN, Global-Attention is outperformed by Query-Attention, demonstrating the necessity of query relation awareness. The superiority of Global-Attention over MEAN could be attributed to the fact that the attention mechanism is effective to identify the neighbors which are globally important regardless of the query.",
"We find that the logic rules greatly help to improve the attention network in LAN. We confirm this point by conducting further experiments where the logic rule mechanism is isolated as a single model (referred to as Logic Rules Only). The results are also demonstrated in Table TABREF46 , from which we find that Query-Attention outperforms MEAN by a limited margin. Meanwhile, Logic Rules Only outperforms both MEAN and Query-Attention by significant margins. These results demonstrate the effectiveness of logic rules in assigning meaningful weights to the neighbors. Specifically, in order to generate representations for unseen entities, it is crucial to incorporate the logic rules to train the aggregator, instead of depending solely on neural networks to learn from the data. By combining the logic rules and neural networks, LAN takes a step further in outperforming all the other models.",
"To find out whether the superiority of LAN to the baselines can generalize to other scoring functions, we replace the scoring function in Eq. ( EQREF20 ) and Eq. ( EQREF36 ) by three typical scoring functions mentioned in Related Works. We omit the results of LSTM, for it is still inferior to MEAN. The results are listed in Table TABREF48 , from which we observe that with different scoring functions, LAN outperforms MEAN consistently by a large margin on all the evaluation metrics. Note that TransE leads to the best results on MEAN and LAN.",
"It's reasonable to suppose that when the ratio of the unseen entities over the training entities increases (namely the observed knowledge graph becomes sparser), all models' performance would deteriorate. To figure out whether our LAN could suffer less on sparse knowledge graphs, we conduct link prediction on datasets with different sample rates INLINEFORM0 as described in Step 1 of the Data Construction section. The results are displayed in Figure FIGREF50 . We observe that the increasing proportion of unseen entities certainly has a negative impact on all models. However, the performance of LAN does not decrease as drastically as that of MEAN and LSTM, indicating that LAN is more robust on sparse KGs."
],
[
"In order to visualize how LAN specifies weights to neighbors, we sample some cases from the Subject-10 testing set. From Table FIGREF50 , we have the following observations. First, with the query relation, LAN could attribute higher weights to neighbors with more relevant relations. In the first case, when the query is origin, the top two neighbors are involved by place_lived and breed_origin, which are helpful to imply origin. In addition, in all three cases, neighbors with relation gender gain the lowest weights since they imply nothing about the query relation. Second, LAN could attribute higher weights to neighbor entities that are more informative. When the query relation is profession, the neighbors Aristotle, Metaphysics and Aesthetics are all relevant to the answer Philosopher. In the third case, we also observe similar situations. Here, the neighbor with the highest weight is (institution, University_of_Calgary) since the query relation place_lived helps the aggregator to focus on the neighboring relation institution, then the neighbor entity University_of_Calgary assists in locating the answer Calgary."
],
[
"In this paper, we address inductive KG embedding, which helps embed emerging entities efficiently. We formulate three characteristics required for effective neighborhood aggregators. To meet the three characteristics, we propose LAN, which attributes different weights to an entity's neighbors in a permutation invariant manner, considering both the redundancy of neighbors and the query relation. The weights are estimated from data with logic rules at a coarse relation level, and neural attention network at a fine neighbor level. Experiments show that LAN outperforms baseline models significantly on two typical KG completion tasks."
],
[
"We thank the three anonymous authors for their constructive comments. This work is supported by the National Natural Science Foundation of China (61472453, U1401256, U1501252, U1611264, U1711261, U1711262)."
]
]
} | {
"question": [
"Which knowledge graph completion tasks do they experiment with?",
"Apart from using desired properties, do they evaluate their LAN approach in some other way?",
"Do they evaluate existing methods in terms of desired properties?"
],
"question_id": [
"69a7a6675c59a4c5fb70006523b9fe0f01ca415c",
"60cb756d382b3594d9e1f4a5e2366db407e378ae",
"352a1bf734b2d7f0618e9e2b0dbed4a3f1787160"
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"link prediction ",
"triplet classification"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We evaluate the effectiveness of our LAN model on two typical knowledge graph completion tasks, i.e., link prediction and triplet classification. We compare our LAN with two baseline aggregators, MEAN and LSTM, as described in the Encoder section. MEAN is used on behalf of pooling functions since it leads to the best performance in BIBREF9 ijcai2017-250. LSTM is used due to its large expressive capability BIBREF26 ."
],
"highlighted_evidence": [
"We evaluate the effectiveness of our LAN model on two typical knowledge graph completion tasks, i.e., link prediction and triplet classification."
]
}
],
"annotation_id": [
"dced57ccbfa5576db8c2d1123c6caf7a0e3091da"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"02b244dab9b2036ea25da4f626876617b3076186"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"620770f9a98f0c92a12b4f40952b43f254969d8d"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
]
} | {
"caption": [
"Figure 1: A motivating example of emerging KG entities. Dotted circles and arrows represent the existing KG while solid ones are brought by the emerging entity.",
"Figure 2: The encoder-decoder framework.",
"Table 1: Statistics of the processed FB15K dataset.",
"Table 2: Evaluation accuracy on triplet classification (%).",
"Table 3: Evaluation results for link prediction.",
"Table 4: Effectiveness of logic rules on Subject-10.",
"Table 5: Different scoring functions on Subject-10.",
"Table 6: The sample cases. The left column contains the emerging entity and the query relation. The middle column contains the neighbors ranked in a descending order according to the weights specified by LAN. The right column contains the ranked prediction from LAN and MEAN. The correct predictions are marked in bold."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png"
]
} |
1909.00124 | Learning with Noisy Labels for Sentence-level Sentiment Classification | Deep neural networks (DNNs) can fit (or even over-fit) the training data very well. If a DNN model is trained using data with noisy labels and tested on data with clean labels, the model may perform poorly. This paper studies the problem of learning with noisy labels for sentence-level sentiment classification. We propose a novel DNN model called NetAb (as shorthand for convolutional neural Networks with Ab-networks) to handle noisy labels during training. NetAb consists of two convolutional neural networks, one with a noise transition layer for dealing with the input noisy labels and the other for predicting 'clean' labels. We train the two networks using their respective loss functions in a mutual reinforcement manner. Experimental results demonstrate the effectiveness of the proposed model. | {
"section_name": [
"Introduction",
"Related Work",
"Proposed Model",
"Experiments",
"Conclusions",
"Acknowledgments"
],
"paragraphs": [
[
"It is well known that sentiment annotation or labeling is subjective BIBREF0. Annotators often have many disagreements. This is especially so for crowd-workers who are not well trained. That is why one always feels that there are many errors in an annotated dataset. In this paper, we study whether it is possible to build accurate sentiment classifiers even with noisy-labeled training data. Sentiment classification aims to classify a piece of text according to the polarity of the sentiment expressed in the text, e.g., positive or negative BIBREF1, BIBREF0, BIBREF2. In this work, we focus on sentence-level sentiment classification (SSC) with labeling errors.",
"As we will see in the experiment section, noisy labels in the training data can be highly damaging, especially for DNNs because they easily fit the training data and memorize their labels even when training data are corrupted with noisy labels BIBREF3. Collecting datasets annotated with clean labels is costly and time-consuming as DNN based models usually require a large number of training examples. Researchers and practitioners typically have to resort to crowdsourcing. However, as mentioned above, the crowdsourced annotations can be quite noisy. Research on learning with noisy labels dates back to 1980s BIBREF4. It is still vibrant today BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 as it is highly challenging. We will discuss the related work in the next section.",
"This paper studies the problem of learning with noisy labels for SSC. Formally, we study the following problem.",
"Problem Definition: Given noisy labeled training sentences $S=\\lbrace (x_1,y_1),...,(x_n,y_n)\\rbrace $, where $x_i|_{i=1}^n$ is the $i$-th sentence and $y_i\\in \\lbrace 1,...,c\\rbrace $ is the sentiment label of this sentence, the noisy labeled sentences are used to train a DNN model for a SSC task. The trained model is then used to classify sentences with clean labels to one of the $c$ sentiment labels.",
"In this paper, we propose a convolutional neural Network with Ab-networks (NetAb) to deal with noisy labels during training, as shown in Figure FIGREF2. We will introduce the details in the subsequent sections. Basically, NetAb consists of two convolutional neural networks (CNNs) (see Figure FIGREF2), one for learning sentiment scores to predict `clean' labels and the other for learning a noise transition matrix to handle input noisy labels. We call the two CNNs A-network and Ab-network, respectively. The fundamental here is that (1) DNNs memorize easy instances first and gradually adapt to hard instances as training epochs increase BIBREF3, BIBREF13; and (2) noisy labels are theoretically flipped from the clean/true labels by a noise transition matrix BIBREF14, BIBREF15, BIBREF16, BIBREF17. We motivate and propose a CNN model with a transition layer to estimate the noise transition matrix for the input noisy labels, while exploiting another CNN to predict `clean' labels for the input training (and test) sentences. In training, we pre-train A-network in early epochs and then train Ab-network and A-network with their own loss functions in an alternating manner. To our knowledge, this is the first work that addresses the noisy label problem in sentence-level sentiment analysis. Our experimental results show that the proposed model outperforms the state-of-the-art methods."
],
[
"Our work is related to sentence sentiment classification (SSC). SSC has been studied extensively BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. None of them can handle noisy labels. Since many social media datasets are noisy, researchers have tried to build robust models BIBREF29, BIBREF30, BIBREF31. However, they treat noisy data as additional information and don't specifically handle noisy labels. A noise-aware classification model in BIBREF12 trains using data annotated with multiple labels. BIBREF32 exploited the connection of users and noisy labels of sentiments in social networks. Since the two works use multiple-labeled data or users' information (we only use single-labeled data, and we do not use any additional information), they have different settings than ours.",
"Our work is closely related to DNNs based approaches to learning with noisy labels. DNNs based approaches explored three main directions: (1) training DNNs on selected samples BIBREF33, BIBREF34, BIBREF35, BIBREF17, (2) modifying the loss function of DNNs with regularization biases BIBREF5, BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF40, and (3) plugging an extra layer into DNNs BIBREF14, BIBREF41, BIBREF15, BIBREF16. All these approaches were proposed for image classification where training images were corrupted with noisy labels. Some of them require noise rate to be known a priori in order to tune their models during training BIBREF37, BIBREF17. Our approach combines direction (1) and direction (3), and trains two networks jointly without knowing the noise rate. We have used five latest existing methods in our experiments for SSC. The experimental results show that they are inferior to our proposed method. In addition, BIBREF42, BIBREF43, BIBREF44, BIBREF45, BIBREF46, and BIBREF47 studied weakly-supervised DNNs or semi-supervised DNNs. But they still need some clean-labeled training data. We use no clean-labeled data."
],
[
"Our model builds on CNN BIBREF25. The key idea is to train two CNNs alternately, one for addressing the input noisy labels and the other for predicting `clean' labels. The overall architecture of the proposed model is given in Figure FIGREF2. Before going further, we first introduce a proposition, a property, and an assumption below.",
"Proposition 1 Noisy labels are flipped from clean labels by an unknown noise transition matrix.",
"Proposition UNKREF3 is reformulated from BIBREF16 and has been investigated in BIBREF14, BIBREF15, BIBREF41. This proposition shows that if we know the noise transition matrix, we can use it to recover the clean labels. In other words, we can put noise transition matrix on clean labels to deal with noisy labels. Given these, we ask the following question: How to estimate such an unknown noise transition matrix?",
"Below we give a solution to this question based on the following property of DNNs.",
"Property 1 DNNs tend to prioritize memorization of simple instances first and then gradually memorize hard instances BIBREF3.",
"BIBREF13 further investigated this property of DNNs. Our setting is that simple instances are sentences of clean labels and hard instances are those with noisy labels. We also have the following assumption.",
"Assumption 1 The noise rate of the training data is less than $50\\%$.",
"This assumption is usually satisfied in practice because without it, it is hard to tackle the input noisy labels during training.",
"Based on the above preliminaries, we need to estimate the noisy transition matrix $Q\\in \\mathbb {R}^{c\\times c}$ ($c=2$ in our case, i.e., positive and negative), and train two classifiers $\\ddot{y}\\sim P(\\ddot{y}|x,\\theta )$ and $\\widehat{y}\\sim \\ P(\\widehat{y}|x,\\vartheta )$, where $x$ is an input sentence, $\\ddot{y}$ is its noisy label, $\\widehat{y}$ is its `clean' label, $\\theta $ and $\\vartheta $ are the parameters of two classifiers. Note that both $\\ddot{y}$ and $\\widehat{y}$ here are the prediction results from our model, not the input labels. We propose to formulate the probability of the sentence $x$ labeled as $j$ with",
"where $P(\\ddot{y}=j|\\widehat{y}=i)$ is an item (the $ji$-th item) in the noisy transition matrix $Q$. We can see that the noisy transition matrix $Q$ is exploited on the `clean' scores $P(\\widehat{y}|x,\\vartheta )$ to tackle noisy labels.",
"We now present our model NetAb and introduce how NetAb performs Eq. (DISPLAY_FORM6). As shown in Figure FIGREF2, NetAb consists of two CNNs. The intuition here is that we use one CNN to perform $P(\\widehat{y}=i|x,\\vartheta )$ and use another CNN to perform $P(\\ddot{y}=j|x,\\theta )$. Meanwhile, the CNN performing $P(\\ddot{y}=j|x,\\theta )$ estimates the noise transition matrix $Q$ to deal with noisy labels. Thus we add a transition layer into this CNN.",
"More precisely, in Figure FIGREF2, the CNN with a clean loss performs $P(\\widehat{y}=i|x,\\vartheta )$. We call this CNN the A-network. The other CNN with a noisy loss performs $P(\\ddot{y}=j|x,\\theta )$. We call this CNN the Ab-network. Ab-network shares all the parameters of A-network except the parameters from the Gate unit and the clean loss. In addition, Ab-network has a transition layer to estimate the noisy transition matrix $Q$. In such a way, A-network predict `clean' labels, and Ab-network handles the input noisy labels.",
"We use cross-entropy with the predicted labels $\\ddot{y}$ and the input labels $y$ (given in the dataset) to compute the noisy loss, formulated as below",
"where $\\mathbb {I}$ is the indicator function (if $y\\!==\\!i$, $\\mathbb {I}\\!=\\!1$; otherwise, $\\mathbb {I}\\!=\\!0$), and $|\\ddot{S}|$ is the number of sentences to train Ab-network in each batch.",
"Similarly, we use cross-entropy with the predicted labels $\\widehat{y}$ and the input labels $y$ to compute the clean loss, formulated as",
"where $|\\widehat{S}|$ is the number of sentences to train A-network in each batch.",
"Next we introduce how our model learns the parameters ($\\vartheta $, $\\theta $ and $Q$). An embedding matrix $v$ is produced for each sentence $x$ by looking up a pre-trained word embedding database (e.g., GloVe.840B BIBREF48). Then an encoding vector $h\\!=\\!CNN(v)$ (and $u\\!=\\!CNN(v)$) is produced for each embedding matrix $v$ in A-network (and Ab-network). A sofmax classifier gives us $P(\\hat{y}\\!=\\!i|x,\\vartheta )$ (i.e., `clean' sentiment scores) on the learned encoding vector $h$. As the noise transition matrix $Q$ indicates the transition values from clean labels to noisy labels, we compute $Q$ as follows",
"where $W_i$ is a trainable parameter matrix, $b_i$ and $f_i$ are two trainable parameter vectors. They are trained in the Ab-network. Finally, $P(\\ddot{y}=j|x,\\theta )$ is computed by Eq. (DISPLAY_FORM6).",
"In training, NetAb is trained end-to-end. Based on Proposition UNKREF3 and Property UNKREF4, we pre-train A-network in early epochs (e.g., 5 epochs). Then we train Ab-network and A-network in an alternating manner. The two networks are trained using their respective cross-entropy loss. Given a batch of sentences, we first train Ab-network. Then we use the scores predicted from A-network to select some possibly clean sentences from this batch and train A-network on the selected sentences. Specifically speaking, we use the predicted scores to compute sentiment labels by $\\arg \\max _i \\lbrace \\ddot{y}=i|\\ddot{y}\\sim P(\\ddot{y}|x,\\theta )\\rbrace $. Then we select the sentences whose resulting sentiment label equals to the input label. The selection process is marked by a Gate unit in Figure FIGREF2. When testing a sentence, we use A-network to produce the final classification result."
],
[
"In this section, we evaluate the performance of the proposed NetAb model. we conduct two types of experiments. (1) We corrupt clean-labeled datasets to produce noisy-labeled datasets to show the impact of noises on sentiment classification accuracy. (2) We collect some real noisy data and use them to train models to evaluate the performance of NetAb.",
"Clean-labeled Datasets. We use three clean labeled datasets. The first one is the movie sentence polarity dataset from BIBREF19. The other two datasets are laptop and restaurant datasets collected from SemEval-2016 . The former consists of laptop review sentences and the latter consists of restaurant review sentences. The original datasets (i.e., Laptop and Restaurant) were annotated with aspect polarity in each sentence. We used all sentences with only one polarity (positive or negative) for their aspects. That is, we only used sentences with aspects having the same sentiment label in each sentence. Thus, the sentiment of each aspect gives the ground-truth as the sentiments of all aspects are the same.",
"For each clean-labeled dataset, the sentences are randomly partitioned into training set and test set with $80\\%$ and $20\\%$, respectively. Following BIBREF25, We also randomly select $10\\%$ of the test data for validation to check the model during training. Summary statistics of the training, validation, and test data are shown in Table TABREF9.",
"Noisy-labeled Training Datasets. For the above three domains (movie, laptop, and restaurant), we collected 2,000 reviews for each domain from the same review source. We extracted sentences from each review and assigned review's label to its sentences. Like previous work, we treat 4 or 5 stars as positive and 1 or 2 stars as negative. The data is noisy because a positive (negative) review can contain negative (positive) sentences, and there are also neutral sentences. This gives us three noisy-labeled training datasets. We still use the same test sets as those for the clean-labeled datasets. Summary statistics of all the datasets are shown in Table TABREF9.",
"Experiment 1: Here we use the clean-labeled data (i.e., the last three columns in Table TABREF9). We corrupt the clean training data by switching the labels of some random instances based on a noise rate parameter. Then we use the corrupted data to train NetAb and CNN BIBREF25.",
"The test accuracy curves with the noise rates [0, $0.1$, $0.2$, $0.3$, $0.4$, $0.5$] are shown in Figure FIGREF13. From the figure, we can see that the test accuracy drops from around 0.8 to 0.5 when the noise rate increases from 0 to 0.5, but our NetAb outperforms CNN. The results clearly show that the performance of the CNN drops quite a lot with the noise rate increasing.",
"Experiment 2: Here we use the real noisy-labeled training data to train our model and the baselines, and then test on the test data in Table TABREF9. Our goal is two fold. First, we want to evaluate NetAb using real noisy data. Second, we want to see whether sentences with review level labels can be used to build effective SSC models.",
"Baselines. We use one strong non-DNN baseline, NBSVM (with unigrams or bigrams features) BIBREF23 and six DNN baselines. The first DNN baseline is CNN BIBREF25, which does not handle noisy labels. The other five were designed to handle noisy labels.",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels. These baselines are inferior to ours as they were tailored for image classification. Note that we found no existing method to deal with noisy labels for SSC. Training Details. We use the publicly available pre-trained embedding GloVe.840B BIBREF48 to initialize the word vectors and the embedding dimension is 300.",
"For each baseline, we obtain the system from its author and use its default parameters. As the DNN baselines (except CNN) were proposed for image classification, we change the input channels from 3 to 1. For our NetAb, we follow BIBREF25 to use window sizes of 3, 4 and 5 words with 100 feature maps per window size, resulting in 300-dimensional encoding vectors. The input length of sentence is set to 40. The network parameters are updated using the Adam optimizer BIBREF49 with a learning rate of 0.001. The learning rate is clipped gradually using a norm of 0.96 in performing the Adam optimization. The dropout rate is 0.5 in the input layer. The number of epochs is 200 and batch size is 50."
],
[
"This paper proposed a novel CNN based model for sentence-level sentiment classification learning for data with noisy labels. The proposed model learns to handle noisy labels during training by training two networks alternately. The learned noisy transition matrices are used to tackle noisy labels. Experimental results showed that the proposed model outperforms a wide range of baselines markedly. We believe that learning with noisy labels is a promising direction as it is often easy to collect noisy-labeled training data."
],
[
"Hao Wang and Yan Yang's work was partially supported by a grant from the National Natural Science Foundation of China (No. 61572407)."
]
]
} | {
"question": [
"How does the model differ from Generative Adversarial Networks?",
"What is the dataset used to train the model?",
"What is the performance of the model?",
"Is the model evaluated against a CNN baseline?"
],
"question_id": [
"045dbdbda5d96a672e5c69442e30dbf21917a1ee",
"c20b012ad31da46642c553ce462bc0aad56912db",
"13e87f6d68f7217fd14f4f9a008a65dd2a0ba91c",
"89b9a2389166b992c42ca19939d750d88c5fa79b"
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"search_query": [
"sentiment ",
"sentiment ",
"sentiment ",
"sentiment "
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"36e4022e631bb303ba899a7b340d8024b3c5e19b"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" movie sentence polarity dataset from BIBREF19",
"laptop and restaurant datasets collected from SemEval-201",
"we collected 2,000 reviews for each domain from the same review source"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Clean-labeled Datasets. We use three clean labeled datasets. The first one is the movie sentence polarity dataset from BIBREF19. The other two datasets are laptop and restaurant datasets collected from SemEval-2016 . The former consists of laptop review sentences and the latter consists of restaurant review sentences. The original datasets (i.e., Laptop and Restaurant) were annotated with aspect polarity in each sentence. We used all sentences with only one polarity (positive or negative) for their aspects. That is, we only used sentences with aspects having the same sentiment label in each sentence. Thus, the sentiment of each aspect gives the ground-truth as the sentiments of all aspects are the same.",
"Noisy-labeled Training Datasets. For the above three domains (movie, laptop, and restaurant), we collected 2,000 reviews for each domain from the same review source. We extracted sentences from each review and assigned review's label to its sentences. Like previous work, we treat 4 or 5 stars as positive and 1 or 2 stars as negative. The data is noisy because a positive (negative) review can contain negative (positive) sentences, and there are also neutral sentences. This gives us three noisy-labeled training datasets. We still use the same test sets as those for the clean-labeled datasets. Summary statistics of all the datasets are shown in Table TABREF9."
],
"highlighted_evidence": [
"Clean-labeled Datasets. We use three clean labeled datasets. The first one is the movie sentence polarity dataset from BIBREF19. The other two datasets are laptop and restaurant datasets collected from SemEval-2016 .",
"Noisy-labeled Training Datasets. For the above three domains (movie, laptop, and restaurant), we collected 2,000 reviews for each domain from the same review source."
]
}
],
"annotation_id": [
"3f1a0f52b0d7249dab4b40e956e286785376f17f"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Experiment 1: ACC around 0.5 with 50% noise rate in worst case - clearly higher than baselines for all noise rates\nExperiment 2: ACC on real noisy datasets: 0.7 on Movie, 0.79 on Laptop, 0.86 on Restaurant (clearly higher than baselines in almost all cases)",
"evidence": [
"The test accuracy curves with the noise rates [0, $0.1$, $0.2$, $0.3$, $0.4$, $0.5$] are shown in Figure FIGREF13. From the figure, we can see that the test accuracy drops from around 0.8 to 0.5 when the noise rate increases from 0 to 0.5, but our NetAb outperforms CNN. The results clearly show that the performance of the CNN drops quite a lot with the noise rate increasing.",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels. These baselines are inferior to ours as they were tailored for image classification. Note that we found no existing method to deal with noisy labels for SSC. Training Details. We use the publicly available pre-trained embedding GloVe.840B BIBREF48 to initialize the word vectors and the embedding dimension is 300.",
"FLOAT SELECTED: Table 2: Accuracy (ACC) of both classes, F1 (F1 pos) of positive class and F1 (F1 neg) of negative class on clean test data/sentences. Training data are real noisy-labeled sentences.",
"FLOAT SELECTED: Figure 2: Accuracy (ACC) on clean test data. For training, the labels of clean data are flipped with the noise rates [0, 0.1, 0.2, 0.3, 0.4, 0.5]. For example, 0.1means that 10% of the labels are flipped. (Color online)"
],
"highlighted_evidence": [
"The test accuracy curves with the noise rates [0, $0.1$, $0.2$, $0.3$, $0.4$, $0.5$] are shown in Figure FIGREF13. From the figure, we can see that the test accuracy drops from around 0.8 to 0.5 when the noise rate increases from 0 to 0.5, but our NetAb outperforms CNN",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels.",
"FLOAT SELECTED: Table 2: Accuracy (ACC) of both classes, F1 (F1 pos) of positive class and F1 (F1 neg) of negative class on clean test data/sentences. Training data are real noisy-labeled sentences.",
"FLOAT SELECTED: Figure 2: Accuracy (ACC) on clean test data. For training, the labels of clean data are flipped with the noise rates [0, 0.1, 0.2, 0.3, 0.4, 0.5]. For example, 0.1means that 10% of the labels are flipped. (Color online)"
]
}
],
"annotation_id": [
"fbe5540e5e8051f9fbdadfcdf4b3c2f2fd62cfb6"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Baselines. We use one strong non-DNN baseline, NBSVM (with unigrams or bigrams features) BIBREF23 and six DNN baselines. The first DNN baseline is CNN BIBREF25, which does not handle noisy labels. The other five were designed to handle noisy labels.",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels. These baselines are inferior to ours as they were tailored for image classification. Note that we found no existing method to deal with noisy labels for SSC. Training Details. We use the publicly available pre-trained embedding GloVe.840B BIBREF48 to initialize the word vectors and the embedding dimension is 300."
],
"highlighted_evidence": [
"Baselines. We use one strong non-DNN baseline, NBSVM (with unigrams or bigrams features) BIBREF23 and six DNN baselines. The first DNN baseline is CNN BIBREF25, which does not handle noisy labels. The other five were designed to handle noisy labels.\n\nThe comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop."
]
}
],
"annotation_id": [
"4a3b781469f48ce226c4af01c0e6f31e0c906298"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Figure 1: The proposed NETAB model (left) and its training method (right). Components in light gray color denote that these components are deactivated during training in that stage. (Color online)",
"Table 1: Summary statistics of the datasets. Number of positive (P) and negative (N) sentences in (noisy and clean) training data, validation data, and test data. The second column shows the statistics of sentences extracted from the 2,000 reviews of each dataset. The last three columns show the statistics of the sentences in three clean-labeled datasets, see “Clean-labeled Datasets”.",
"Table 2: Accuracy (ACC) of both classes, F1 (F1 pos) of positive class and F1 (F1 neg) of negative class on clean test data/sentences. Training data are real noisy-labeled sentences.",
"Figure 2: Accuracy (ACC) on clean test data. For training, the labels of clean data are flipped with the noise rates [0, 0.1, 0.2, 0.3, 0.4, 0.5]. For example, 0.1means that 10% of the labels are flipped. (Color online)"
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Figure2-1.png"
]
} |
1909.00088 | Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange | In this paper, we present a novel method for measurably adjusting the semantics of text while preserving its sentiment and fluency, a task we call semantic text exchange. This is useful for text data augmentation and the semantic correction of text generated by chatbots and virtual assistants. We introduce a pipeline called SMERTI that combines entity replacement, similarity masking, and text infilling. We measure our pipeline's success by its Semantic Text Exchange Score (STES): the ability to preserve the original text's sentiment and fluency while adjusting semantic content. We propose to use masking (replacement) rate threshold as an adjustable parameter to control the amount of semantic change in the text. Our experiments demonstrate that SMERTI can outperform baseline models on Yelp reviews, Amazon reviews, and news headlines. | {
"section_name": [
"Introduction",
"Related Work ::: Word and Sentence-level Embeddings",
"Related Work ::: Text Infilling",
"Related Work ::: Style and Sentiment Transfer",
"Related Work ::: Review Generation",
"SMERTI ::: Overview",
"SMERTI ::: Entity Replacement Module (ERM)",
"SMERTI ::: Entity Replacement Module (ERM) ::: Stanford Parser",
"SMERTI ::: Entity Replacement Module (ERM) ::: Universal Sentence Encoder (USE)",
"SMERTI ::: Similarity Masking Module (SMM)",
"SMERTI ::: Text Infilling Module (TIM)",
"SMERTI ::: Text Infilling Module (TIM) ::: Bidirectional RNN with Attention",
"SMERTI ::: Text Infilling Module (TIM) ::: Transformer",
"Experiment ::: Datasets",
"Experiment ::: Experiment Details",
"Experiment ::: Baseline Models",
"Evaluation ::: Evaluation Setup",
"Evaluation ::: Key Evaluation Metrics",
"Evaluation ::: Semantic Text Exchange Score (STES)",
"Evaluation ::: Automatic Evaluation Results",
"Evaluation ::: Human Evaluation Setup",
"Evaluation ::: Human Evaluation Results",
"Analysis ::: Performance by Model",
"Analysis ::: Performance By Model - Human Results",
"Analysis ::: SMERTI's Performance By POS",
"Analysis ::: SMERTI's Performance By Dataset",
"Analysis ::: SMERTI's Performance By MRT/RRT",
"Conclusion and Future Work",
"Acknowledgments"
],
"paragraphs": [
[
"There has been significant research on style transfer, with the goal of changing the style of text while preserving its semantic content. The alternative where semantics are adjusted while keeping style intact, which we call semantic text exchange (STE), has not been investigated to the best of our knowledge. Consider the following example, where the replacement entity defines the new semantic context:",
"Original Text: It is sunny outside! Ugh, that means I must wear sunscreen. I hate being sweaty and sticky all over. Replacement Entity: weather = rainy Desired Text: It is rainy outside! Ugh, that means I must bring an umbrella. I hate being wet and having to carry it around.",
"The weather within the original text is sunny, whereas the actual weather may be rainy. Not only is the word sunny replaced with rainy, but the rest of the text's content is changed while preserving its negative sentiment and fluency. With the rise of natural language processing (NLP) has come an increased demand for massive amounts of text data. Manually collecting and scraping data requires a significant amount of time and effort, and data augmentation techniques for NLP are limited compared to fields such as computer vision. STE can be used for text data augmentation by producing various modifications of a piece of text that differ in semantic content.",
"Another use of STE is in building emotionally aligned chatbots and virtual assistants. This is useful for reasons such as marketing, overall enjoyment of interaction, and mental health therapy. However, due to limited data with emotional content in specific semantic contexts, the generated text may contain incorrect semantic content. STE can adjust text semantics (e.g. to align with reality or a specific task) while preserving emotions.",
"One specific example is the development of virtual assistants with adjustable socio-emotional personalities in the effort to construct assistive technologies for persons with cognitive disabilities. Adjusting the emotional delivery of text in subtle ways can have a strong effect on the adoption of the technologies BIBREF0. It is challenging to transfer style this subtly due to lack of datasets on specific topics with consistent emotions. Instead, large datasets of emotionally consistent interactions not confined to specific topics exist. Hence, it is effective to generate text with a particular emotion and then adjust its semantics.",
"We propose a pipeline called SMERTI (pronounced `smarty') for STE. Combining entity replacement (ER), similarity masking (SM), and text infilling (TI), SMERTI can modify the semantic content of text. We define a metric called the Semantic Text Exchange Score (STES) that evaluates the overall ability of a model to perform STE, and an adjustable parameter masking (replacement) rate threshold (MRT/RRT) that can be used to control the amount of semantic change.",
"We evaluate on three datasets: Yelp and Amazon reviews BIBREF1, and Kaggle news headlines BIBREF2. We implement three baseline models for comparison: Noun WordNet Semantic Text Exchange Model (NWN-STEM), General WordNet Semantic Text Exchange Model (GWN-STEM), and Word2Vec Semantic Text Exchange Model (W2V-STEM).",
"We illustrate the STE performance of two SMERTI variations on the datasets, demonstrating outperformance of the baselines and pipeline stability. We also run a human evaluation supporting our results. We analyze the results in detail and investigate relationships between the semantic change, fluency, sentiment, and MRT/RRT. Our major contributions can be summarized as:",
"We define a new task called semantic text exchange (STE) with increasing importance in NLP applications that modifies text semantics while preserving other aspects such as sentiment.",
"We propose a pipeline SMERTI capable of multi-word entity replacement and text infilling, and demonstrate its outperformance of baselines.",
"We define an evaluation metric for overall performance on semantic text exchange called the Semantic Text Exchange Score (STES)."
],
[
"Word2Vec BIBREF3, BIBREF4 allows for analogy representation through vector arithmetic. We implement a baseline (W2V-STEM) using this technique. The Universal Sentence Encoder (USE) BIBREF5 encodes sentences and is trained on a variety of web sources and the Stanford Natural Language Inference corpus BIBREF6. Flair embeddings BIBREF7 are based on architectures such as BERT BIBREF8. We use USE for SMERTI as it is designed for transfer learning and shows higher performance on textual similarity tasks compared to other models BIBREF9."
],
[
"Text infilling is the task of filling in missing parts of sentences called masks. MaskGAN BIBREF10 is restricted to a single word per mask token, while SMERTI is capable of variable length infilling for more flexible output. BIBREF11 uses a transformer-based architecture. They fill in random masks, while SMERTI fills in masks guided by semantic similarity, resulting in more natural infilling and fulfillment of the STE task."
],
[
"Notable works in style/sentiment transfer include BIBREF12, BIBREF13, BIBREF14, BIBREF15. They attempt to learn latent representations of various text aspects such as its context and attributes, or separate style from content and encode them into hidden representations. They then use an RNN decoder to generate a new sentence given a targeted sentiment attribute."
],
[
"BIBREF16 generates fake reviews from scratch using language models. BIBREF17, BIBREF18, BIBREF19 generate reviews from scratch given auxiliary information (e.g. the item category and star rating). BIBREF20 generates reviews using RNNs with two components: generation from scratch and review customization (Algorithm 2 in BIBREF20). They define review customization as modifying the generated review to fit a new topic or context, such as from a Japanese restaurant to an Italian one. They condition on a keyword identifying the desired context, and replace similar nouns with others using WordNet BIBREF21. They require a “reference dataset\" (required to be “on topic\"; easy enough for restaurant reviews, but less so for arbitrary conversational agents). As noted by BIBREF19, the method of BIBREF20 may also replace words independently of context. We implement their review customization algorithm (NWN-STEM) and a modified version (GWN-STEM) as baseline models."
],
[
"The task is to transform a corpus $C$ of lines of text $S_i$ and associated replacement entities $RE_i:C = \\lbrace (S_1,RE_1),(S_2,RE_2),\\ldots , (S_n, RE_n)\\rbrace $ to a modified corpus $\\hat{C} = \\lbrace \\hat{S}_1,\\hat{S}_2,\\ldots ,\\hat{S}_n\\rbrace $, where $\\hat{S}_i$ are the original text lines $S_i$ replaced with $RE_i$ and overall semantics adjusted. SMERTI consists of the following modules, shown in Figure FIGREF15:",
"Entity Replacement Module (ERM): Identify which word(s) within the original text are best replaced with the $RE$, which we call the Original Entity ($OE$). We replace $OE$ in $S$ with $RE$. We call this modified text $S^{\\prime }$.",
"Similarity Masking Module (SMM): Identify words/phrases in $S^{\\prime }$ similar to $OE$ and replace them with a [mask]. Group adjacent [mask]s into a single one so we can fill a variable length of text into each. We call this masked text $S^{\\prime \\prime }$.",
"Text Infilling Module (TIM): Fill in [mask] tokens with text that better suits the $RE$. This will modify semantics in the rest of the text. This final output text is called $\\hat{S}$."
],
[
"For entity replacement, we use a combination of the Universal Sentence Encoder BIBREF5 and Stanford Parser BIBREF22."
],
[
"The Stanford Parser is a constituency parser that determines the grammatical structure of sentences, including phrases and part-of-speech (POS) labelling. By feeding our $RE$ through the parser, we are able to determine its parse-tree. Iterating through the parse-tree and its sub-trees, we can obtain a list of constituent tags for the $RE$. We then feed our input text $S$ through the parser, and through a similar process, we can obtain a list of leaves (where leaves under a single label are concatenated) that are equal or similar to any of the $RE$ constituent tags. This generates a list of entities having the same (or similar) grammatical structure as the $RE$, and are likely candidates for the $OE$. We then feed these entities along with the $RE$ into the Universal Sentence Encoder (USE)."
],
[
"The USE is a sentence-level embedding model that comes with a deep averaging network (DAN) and transformer model BIBREF5. We choose the transformer model as these embeddings take context into account, and the exact same word/phrase will have a different embedding depending on its context and surrounding words.",
"We compute the semantic similarity between two embeddings $u$ and $v$: $sim(u,v)$, using the angular (cosine) distance, defined as: $\\cos (\\theta _{u,v}) = (u\\cdot v)/(||u|| ||v||)$, such that $sim(u,v) = 1-\\frac{1}{\\pi }arccos(\\cos (\\theta _{u,v}))$. Results are in $[0,1]$, with higher values representing greater similarity.",
"Using USE and the above equation, we can identify words/phrases within the input text $S$ which are most similar to $RE$. To assist with this, we use the Stanford Parser as described above to obtain a list of candidate entities. In the rare case that this list is empty, we feed in each word of $S$ into USE, and identify which word is the most similar to $RE$. We then replace the most similar entity or word ($OE$) with the $RE$ and generate $S^{\\prime }$.",
"An example of this entity replacement process is in Figure FIGREF18. Two parse-trees are shown: for $RE$ (a) and $S$ (b) and (c). Figure FIGREF18(d) is a semantic similarity heat-map generated from the USE embeddings of the candidate $OE$s and $RE$, where values are similarity scores in the range $[0,1]$.",
"As seen in Figure FIGREF18(d), we calculate semantic similarities between $RE$ and entities within $S$ which have noun constituency tags. Looking at the row for our $RE$ restaurant, the most similar entity (excluding itself) is hotel. We can then generate:",
"$S^{\\prime }$ = i love this restaurant ! the beds are comfortable and the service is great !"
],
[
"Next, we mask words similar to $OE$ to generate $S^{\\prime \\prime }$ using USE. We look at semantic similarities between every word in $S$ and $OE$, along with semantic similarities between $OE$ and the candidate entities determined in the previous ERM step to broaden the range of phrases our module can mask. We ignore $RE$, $OE$, and any entities or phrases containing $OE$ (for example, `this hotel').",
"After determining words similar to the $OE$ (discussed below), we replace each of them with a [mask] token. Next, we replace [mask] tokens adjacent to each other with a single [mask].",
"We set a base similarity threshold (ST) that selects a subset of words to mask. We compare the actual fraction of masked words to the masking rate threshold (MRT), as defined by the user, and increase ST in intervals of $0.05$ until the actual masking rate falls below the MRT. Some sample masked outputs ($S^{\\prime \\prime }$) using various MRT-ST combinations for the previous example are shown in Table TABREF21 (more examples in Appendix A).",
"The MRT is similar to the temperature parameter used to control the “novelty” of generated text in works such as BIBREF20. A high MRT means the user wants to generate text very semantically dissimilar to the original, and may be desired in cases such as creating a lively chatbot or correcting text that is heavily incorrect semantically. A low MRT means the user wants to generate text semantically similar to the original, and may be desired in cases such as text recovery, grammar correction, or correcting a minor semantic error in text. By varying the MRT, various pieces of text that differ semantically in subtle ways can be generated, assisting greatly with text data augmentation. The MRT also affects sentiment and fluency, as we show in Section SECREF59."
],
[
"We use two seq2seq models for our TIM: an RNN (recurrent neural network) model BIBREF23 (called SMERTI-RNN), and a transformer model (called SMERTI-Transformer)."
],
[
"We use a bidirectional variant of the GRU BIBREF24, and hence two RNNs for the encoder: one reads the input sequence in standard sequential order, and the other is fed this sequence in reverse. The outputs are summed at each time step, giving us the ability to encode information from both past and future context.",
"The decoder generates the output in a sequential token-by-token manner. To combat information loss, we implement the attention mechanism BIBREF25. We use a Luong attention layer BIBREF26 which uses global attention, where all the encoder's hidden states are considered, and use the decoder's current time-step hidden state to calculate attention weights. We use the dot score function for attention, where $h_t$ is the current target decoder state and $\\bar{h}_s$ is all encoder states: $score(h_t,\\bar{h}_s)=h_t^T\\bar{h}_s$."
],
[
"Our second model makes use of the transformer architecture, and our implementation replicates BIBREF27. We use an encoder-decoder structure with a multi-head self-attention token decoder to condition on information from both past and future context. It maps a query and set of key-value pairs to an output. The queries and keys are of dimension $d_k$, and values of dimension $d_v$. To compute the attention, we pack a set of queries, keys, and values into matrices $Q$, $K$, and $V$, respectively. The matrix of outputs is computed as:",
"",
"Multi-head attention allows the model to jointly attend to information from different positions. The decoder can make use of both local and global semantic information while filling in each [mask]."
],
[
"We train our two TIMs on the three datasets. The Amazon dataset BIBREF1 contains over 83 million user reviews on products, with duplicate reviews removed. The Yelp dataset includes over six million user reviews on businesses. The news headlines dataset from Kaggle contains approximately $200,000$ news headlines from 2012 to 2018 obtained from HuffPost BIBREF2.",
"We filter the text to obtain reviews and headlines which are English, do not contain hyperlinks and other obvious noise, and are less than 20 words long. We found that many longer than twenty words ramble on and are too verbose for our purposes. Rather than filtering by individual sentences we keep each text in its entirety so SMERTI can learn to generate multiple sentences at once. We preprocess the text by lowercasing and removing rare/duplicate punctuation and space.",
"For Amazon and Yelp, we treat reviews greater than three stars as containing positive sentiment, equal to three stars as neutral, and less than three stars as negative. For each training and testing set, we include an equal number of randomly selected positive and negative reviews, and half as many neutral reviews. This is because neutral reviews only occupy one out of five stars compared to positive and negative which occupy two each. Our dataset statistics can be found in Appendix B."
],
[
"To set up our training and testing data for text infilling, we mask the text. We use a tiered masking approach: for each dataset, we randomly mask 15% of the words in one-third of the lines, 30% of the words in another one-third, and 45% in the remaining one-third. These masked texts serve as the inputs, while the original texts serve as the ground-truth. This allows our TIM models to learn relationships between masked words and relationships between masked and unmasked words.",
"The bidirectional RNN decoder fills in blanks one by one, with the objective of minimizing the cross entropy loss between its output and the ground-truth. We use a hidden size of 500, two layers for the encoder and decoder, teacher-forcing ratio of 1.0, learning rate of 0.0001, dropout of 0.1, batch size of 64, and train for up to 40 epochs.",
"For the transformer, we use scaled dot-product attention and the same hyperparameters as BIBREF27. We use the Adam optimizer BIBREF28 with $\\beta _1 = 0.9, \\beta _2 = 0.98$, and $\\epsilon = 10^{-9}$. As in BIBREF27, we increase the $learning\\_rate$ linearly for the first $warmup\\_steps$ training steps, and then decrease the $learning\\_rate$ proportionally to the inverse square root of the step number. We set $factor=1$ and use $warmup\\_steps = 2000$. We use a batch size of 4096, and we train for up to 40 epochs."
],
[
"We implement three models to benchmark against. First is NWN-STEM (Algorithm 2 from BIBREF20). We use the training sets as the “reference review sets\" to extract similar nouns to the $RE$ (using MINsim = 0.1). We then replace nouns in the text similar to the $RE$ with nouns extracted from the associated reference review set.",
"Secondly, we modify NWN-STEM to work for verbs and adjectives, and call this GWN-STEM. From the reference review sets, we extract similar nouns, verbs, and adjectives to the $RE$ (using MINsim = 0.1), where the $RE$ is now not restricted to being a noun. We replace nouns, verbs, and adjectives in the text similar to the $RE$ with those extracted from the associated reference review set.",
"Lastly, we implement W2V-STEM using Gensim BIBREF29. We train uni-gram Word2Vec models for single word $RE$s, and four-gram models for phrases. Models are trained on the training sets. We use cosine similarity to determine the most similar word/phrase in the input text to $RE$, which is the replaced $OE$. For all other words/phrases, we calculate $w_{i}^{\\prime } = w_{i} - w_{OE} + w_{RE}$, where $w_{i}$ is the original word/phrase's embedding vector, $w_{OE}$ is the $OE$'s, $w_{RE}$ is the $RE$'s, and $w_{i}^{\\prime }$ is the resulting embedding vector. The replacement word/phrase is $w_{i}^{\\prime }$'s nearest neighbour. We use similarity thresholds to adjust replacement rates (RR) and produce text under various replacement rate thresholds (RRT)."
],
[
"We manually select 10 nouns, 10 verbs, 10 adjectives, and 5 phrases from the top 10% most frequent words/phrases in each test set as our evaluation $RE$s. We filter the verbs and adjectives through a list of sentiment words BIBREF30 to ensure we do not choose $RE$s that would obviously significantly alter the text's sentiment.",
"For each evaluation $RE$, we choose one-hundred lines from the corresponding test set that does not already contain $RE$. We choose lines with at least five words, as many with less carry little semantic meaning (e.g. `Great!', `It is okay'). For Amazon and Yelp, we choose 50 positive and 50 negative lines per $RE$. We repeat this process three times, resulting in three sets of 1000 lines per dataset per POS (excluding phrases), and three sets of 500 lines per dataset for phrases. Our final results are averaged metrics over these three sets.",
"For SMERTI-Transformer, SMERTI-RNN, and W2V-STEM, we generate four outputs per text for MRT/RRT of 20%, 40%, 60%, and 80%, which represent upper-bounds on the percentage of the input that can be masked and/or replaced. Note that NWN-STEM and GWN-STEM can only evaluate on limited POS and their maximum replacement rates are limited. We select MINsim values of 0.075 and 0 for nouns and 0.1 and 0 for verbs, as these result in replacement rates approximately equal to the actual MR/RR of the other models' outputs for 20% and 40% MRT/RRT, respectively."
],
[
"Fluency (SLOR) We use syntactic log-odds ratio (SLOR) BIBREF31 for sentence level fluency and modify from their word-level formula to character-level ($SLOR_{c}$). We use Flair perplexity values from a language model trained on the One Billion Words corpus BIBREF32:",
"where $|S|$ and $|w|$ are the character lengths of the input text $S$ and the word $w$, respectively, $p_M(S)$ and $p_M(w)$ are the probabilities of $S$ and $w$ under the language model $M$, respectively, and $PPL_S$ and $PPL_w$ are the character-level perplexities of $S$ and $w$, respectively. SLOR (from hereon we refer to character-level SLOR as simply SLOR) measures aspects of text fluency such as grammaticality. Higher values represent higher fluency.",
"We rescale resulting SLOR values to the interval [0,1] by first fitting and normalizing a Gaussian distribution. We then truncate normalized data points outside [-3,3], which shifts approximately 0.69% of total data. Finally, we divide each data point by six and add 0.5 to each result.",
"Sentiment Preservation Accuracy (SPA) is defined as the percentage of outputs that carry the same sentiment as the input. We use VADER BIBREF33 to evaluate sentiment as positive, negative, or neutral. It handles typos, emojis, and other aspects of online text. Content Similarity Score (CSS) ranges from 0 to 1 and indicates the semantic similarity between generated text and the $RE$. A value closer to 1 indicates stronger semantic exchange, as the output is closer in semantic content to the $RE$. We also use the USE for this due to its design and strong performance as previously mentioned."
],
[
"We come up with a single score to evaluate overall performance of a model on STE that combines the key evaluation metrics. It uses the harmonic mean, similar to the F1 score (or F-score) BIBREF34, BIBREF35, and we call it the Semantic Text Exchange Score (STES):",
"where $A$ is SPA, $B$ is SLOR, and $C$ is CSS. STES ranges between 0 and 1, with scores closer to 1 representing higher overall performance. Like the F1 score, STES penalizes models which perform very poorly in one or more metrics, and favors balanced models achieving strong results in all three."
],
[
"Table TABREF38 shows overall average results by model. Table TABREF41 shows outputs for a Yelp example.",
"As observed from Table TABREF41 (see also Appendix F), SMERTI is able to generate high quality output text similar to the $RE$ while flowing better than other models' outputs. It can replace entire phrases and sentences due to its variable length infilling. Note that for nouns, the outputs from GWN-STEM and NWN-STEM are equivalent."
],
[
"We conduct a human evaluation with eight participants, 6 males and 2 females, that are affiliated project researchers aged 20-39 at the University of Waterloo. We randomly choose one evaluation line for a randomly selected word or phrase for each POS per dataset. The input text and each model's output (for 40% MRT/RRT - chosen as a good middle ground) for each line is presented to participants, resulting in a total of 54 pieces of text, and rated on the following criteria from 1-5:",
"RE Match: “How related is the entire text to the concept of [X]\", where [X] is a word or phrase (1 - not at all related, 3 - somewhat related, 5 - very related). Note here that [X] is a given $RE$.",
"Fluency: “Does the text make sense and flow well?\" (1 - not at all, 3 - somewhat, 5 - very)",
"Sentiment: “How do you think the author of the text was feeling?\" (1 - very negative, 3 - neutral, 5 - very positive)",
"Each participant evaluates every piece of text. They are presented with a single piece of text at a time, with the order of models, POS, and datasets completely randomized."
],
[
"Average human evaluation scores are displayed in Table TABREF50. Sentiment Preservation (between 0 and 1) is calculated by comparing the average Sentiment rating for each model's output text to the Sentiment rating of the input text, and if both are less than 2.5 (negative), between 2.5 and 3.5 inclusive (neutral), or greater than 3.5 (positive), this is counted as a valid case of Sentiment Preservation. We repeat this for every evaluation line to calculate the final values per model. Harmonic means of all three metrics (using rescaled 0-1 values of RE Match and Fluency) are also displayed."
],
[
"As seen in Table TABREF38, both SMERTI variations achieve higher STES and outperform the other models overall, with the WordNet models performing the worst. SMERTI excels especially on fluency and content similarity. The transformer variation achieves slightly higher SLOR, while the RNN variation achieves slightly higher CSS. The WordNet models perform strongest in sentiment preservation (SPA), likely because they modify little of the text and only verbs and nouns. They achieve by far the lowest CSS, likely in part due to this limited text replacement. They also do not account for context, and many words (e.g. proper nouns) do not exist in WordNet. Overall, the WordNet models are not very effective at STE.",
"W2V-STEM achieves the lowest SLOR, especially for higher RRT, as supported by the example in Table TABREF41 (see also Appendix F). W2V-STEM and WordNet models output grammatically incorrect text that flows poorly. In many cases, words are repeated multiple times. We analyze the average Type Token Ratio (TTR) values of each model's outputs, which is the ratio of unique divided by total words. As shown in Table TABREF52, the SMERTI variations achieve the highest TTR, while W2V-STEM and NWN-STEM the lowest.",
"Note that while W2V-STEM achieves lower CSS than SMERTI, it performs comparably in this aspect. This is likely due to its vector arithmetic operations algorithm, which replaces each word with one more similar to the RE. This is also supported by the lower TTR, as W2V-STEM frequently outputs the same words multiple times."
],
[
"As seen in Table TABREF50, the SMERTI variations outperform all baseline models overall, particularly in RE Match. SMERTI-Transformer performs the best, with SMERTI-RNN second. The WordNet models achieve high Sentiment Preservation, but much lower on RE Match. W2V-STEM achieves comparably high RE Match, but lowest Fluency.",
"These results correspond well with our automatic evaluation results in Table TABREF38. We look at the Pearson correlation values between RE Match, Fluency, and Sentiment Preservation with CSS, SLOR, and SPA, respectively. These are 0.9952, 0.9327, and 0.8768, respectively, demonstrating that our automatic metrics are highly effective and correspond well with human ratings."
],
[
"As seen from Table TABREF55 , SMERTI's SPA values are highest for nouns, likely because they typically carry little sentiment, and lowest for adjectives, likely because they typically carry the most.",
"SLOR is lowest for adjectives and highest for phrases and nouns. Adjectives typically carry less semantic meaning and SMERTI likely has more trouble figuring out how best to infill the text. In contrast, nouns typically carry more, and phrases the most (since they consist of multiple words).",
"SMERTI's CSS is highest for phrases then nouns, likely due to phrases and nouns carrying more semantic meaning, making it easier to generate semantically similar text. Both SMERTI's and the input text's CSS are lowest for adjectives, likely because they carry little semantic meaning.",
"Overall, SMERTI appears to be more effective on nouns and phrases than verbs and adjectives."
],
[
"As seen in Table TABREF58, SMERTI's SPA is lowest for news headlines. Amazon and Yelp reviews naturally carry stronger sentiment, likely making it easier to generate text with similar sentiment.",
"Both SMERTI's and the input text's SLOR appear to be lower for Yelp reviews. This may be due to many reasons, such as more typos and emojis within the original reviews, and so forth.",
"SMERTI's CSS values are slightly higher for news headlines. This may be due to them typically being shorter and carrying more semantic meaning as they are designed to be attention grabbers.",
"Overall, it seems that using datasets which inherently carry more sentiment will lead to better sentiment preservation. Further, the quality of the dataset's original text, unsurprisingly, influences the ability of SMERTI to generate fluent text."
],
[
"From Table TABREF60, it can be seen that as MRT/RRT increases, SMERTI's SPA and SLOR decrease while CSS increases. These relationships are very strong as supported by the Pearson correlation values of -0.9972, -0.9183, and 0.9078, respectively. When SMERTI can alter more text, it has the opportunity to replace more related to sentiment while producing more of semantic similarity to the $RE$.",
"Further, SMERTI generates more of the text itself, becoming less similar to the human-written input, resulting in lower fluency. To further demonstrate this, we look at average SMERTI BLEU BIBREF36 scores against MRT/RRT, shown in Table TABREF60. BLEU generally indicates how close two pieces of text are in content and structure, with higher values indicating greater similarity. We report our final BLEU scores as the average scores of 1 to 4-grams. As expected, BLEU decreases as MRT/RRT increases, and this relationship is very strong as supported by the Pearson correlation value of -0.9960.",
"It is clear that MRT/RRT represents a trade-off between CSS against SPA and SLOR. It is thus an adjustable parameter that can be used to control the generated text, and balance semantic exchange against fluency and sentiment preservation."
],
[
"We introduced the task of semantic text exchange (STE), demonstrated that our pipeline SMERTI performs well on STE, and proposed an STES metric for evaluating overall STE performance. SMERTI outperformed other models and was the most balanced overall. We also showed a trade-off between semantic exchange against fluency and sentiment preservation, which can be controlled by the masking (replacement) rate threshold.",
"Potential directions for future work include adding specific methods to control sentiment, and fine-tuning SMERTI for preservation of persona or personality. Experimenting with other text infilling models (e.g. fine-tuning BERT BIBREF8) is also an area of exploration. Lastly, our human evaluation is limited in size and a larger and more diverse participant pool is needed.",
"We conclude by addressing potential ethical misuses of STE, including assisting in the generation of spam and fake-reviews/news. These risks come with any intelligent chatbot work, but we feel that the benefits, including usage in the detection of misuse such as fake-news, greatly outweigh the risks and help progress NLP and AI research."
],
[
"We thank our anonymous reviewers, study participants, and Huawei Technologies Co., Ltd. for financial support."
]
]
} | {
"question": [
"Does the model proposed beat the baseline models for all the values of the masking parameter tested?",
"Has STES been previously used in the literature to evaluate similar tasks?",
"What are the baseline models mentioned in the paper?"
],
"question_id": [
"dccc3b182861fd19ccce5bd00ce9c3f40451ed6e",
"98ba7a7aae388b1a77dd6cab890977251d906359",
"3da9a861dfa25ed486cff0ef657d398fdebf8a93"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"sentiment ",
"sentiment ",
"sentiment "
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Figure 4: Graph of average results by MRT/RRT",
"FLOAT SELECTED: Table 9: Average results by MRT/RRT"
],
"highlighted_evidence": [
"FLOAT SELECTED: Figure 4: Graph of average results by MRT/RRT",
"FLOAT SELECTED: Table 9: Average results by MRT/RRT"
]
}
],
"annotation_id": [
"3db00a97604b4b372f6e78460f42925c42f13417"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We propose a pipeline called SMERTI (pronounced `smarty') for STE. Combining entity replacement (ER), similarity masking (SM), and text infilling (TI), SMERTI can modify the semantic content of text. We define a metric called the Semantic Text Exchange Score (STES) that evaluates the overall ability of a model to perform STE, and an adjustable parameter masking (replacement) rate threshold (MRT/RRT) that can be used to control the amount of semantic change."
],
"highlighted_evidence": [
"We define a metric called the Semantic Text Exchange Score (STES) that evaluates the overall ability of a model to perform STE, and an adjustable parameter masking (replacement) rate threshold (MRT/RRT) that can be used to control the amount of semantic change."
]
}
],
"annotation_id": [
"5add668cc603080a7775920a6a8cf6d8b3b55f0d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Noun WordNet Semantic Text Exchange Model (NWN-STEM)",
"General WordNet Semantic Text Exchange Model (GWN-STEM)",
"Word2Vec Semantic Text Exchange Model (W2V-STEM)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We evaluate on three datasets: Yelp and Amazon reviews BIBREF1, and Kaggle news headlines BIBREF2. We implement three baseline models for comparison: Noun WordNet Semantic Text Exchange Model (NWN-STEM), General WordNet Semantic Text Exchange Model (GWN-STEM), and Word2Vec Semantic Text Exchange Model (W2V-STEM)."
],
"highlighted_evidence": [
"We implement three baseline models for comparison: Noun WordNet Semantic Text Exchange Model (NWN-STEM), General WordNet Semantic Text Exchange Model (GWN-STEM), and Word2Vec Semantic Text Exchange Model (W2V-STEM)."
]
}
],
"annotation_id": [
"02eb95f2257705e792f44f13751c644179727831"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} | {
"caption": [
"Table 1: Example masked outputs. S is the original input text; RE is the replacement entity; S′′1 corresponds to MRT = 0.2, base ST = 0.4; S ′′ 2 corresponds to MRT = 0.4, base ST = 0.3; S′′3 corresponds to MRT = 0.6, base ST = 0.2; S ′′ 4 corresponds to MRT = 0.8, base ST = 0.1",
"Table 2: Training and testing splits by dataset",
"Table 3: Chosen evaluation noun REs. *Obama does not exist in WordNet, so we instead use the word President for NWN-STEM and GWN-STEM.",
"Figure 1: Graph of overall average results (referring to the data found in Table 2 of the main body)",
"Table 4: Chosen evaluation verb REs",
"Table 5: Chosen evaluation adjective REs",
"Table 6: Chosen evaluation phrase REs",
"Table 7: Average results by POS",
"Figure 2: Graph of average results by POS",
"Table 8: Average results by dataset",
"Figure 3: Graph of average results by dataset",
"Table 9: Average results by MRT/RRT",
"Figure 4: Graph of average results by MRT/RRT",
"Table 10: Example outputs for an Amazon evaluation line with noun RE",
"Table 11: Example outputs for an Amazon evaluation line with verb RE",
"Table 12: Example outputs for an Amazon evaluation line with adjective RE",
"Table 13: Example outputs for an Amazon evaluation line with phrase RE",
"Table 14: Example outputs for a Yelp evaluation line with noun RE",
"Table 15: Example outputs for a Yelp evaluation line with verb RE",
"Table 16: Example outputs for a Yelp evaluation line with adjective RE",
"Table 17: Example outputs for a Yelp evaluation line with phrase RE",
"Table 18: Example outputs for a news headlines evaluation line with noun RE",
"Table 19: Example outputs for a news headlines evaluation line with verb RE",
"Table 20: Example outputs for a news headlines evaluation line with adjective RE",
"Table 21: Example outputs for a news headlines evaluation line with phrase RE"
],
"file": [
"13-Table1-1.png",
"14-Table2-1.png",
"15-Table3-1.png",
"15-Figure1-1.png",
"15-Table4-1.png",
"15-Table5-1.png",
"15-Table6-1.png",
"16-Table7-1.png",
"16-Figure2-1.png",
"17-Table8-1.png",
"17-Figure3-1.png",
"18-Table9-1.png",
"18-Figure4-1.png",
"19-Table10-1.png",
"20-Table11-1.png",
"21-Table12-1.png",
"21-Table13-1.png",
"22-Table14-1.png",
"23-Table15-1.png",
"24-Table16-1.png",
"24-Table17-1.png",
"25-Table18-1.png",
"25-Table19-1.png",
"26-Table20-1.png",
"26-Table21-1.png"
]
} |
1911.01799 | CN-CELEB: a challenging Chinese speaker recognition dataset | Recently, researchers set an ambitious goal of conducting speaker recognition in unconstrained conditions where the variations on ambient, channel and emotion could be arbitrary. However, most publicly available datasets are collected under constrained environments, i.e., with little noise and limited channel variation. These datasets tend to deliver over optimistic performance and do not meet the request of research on speaker recognition in unconstrained conditions. In this paper, we present CN-Celeb, a large-scale speaker recognition dataset collected `in the wild'. This dataset contains more than 130,000 utterances from 1,000 Chinese celebrities, and covers 11 different genres in real world. Experiments conducted with two state-of-the-art speaker recognition approaches (i-vector and x-vector) show that the performance on CN-Celeb is far inferior to the one obtained on VoxCeleb, a widely used speaker recognition dataset. This result demonstrates that in real-life conditions, the performance of existing techniques might be much worse than it was thought. Our database is free for researchers and can be downloaded from this http URL. | {
"section_name": [
"Introduction",
"The CN-Celeb dataset ::: Data description",
"The CN-Celeb dataset ::: Challenges with CN-Celeb",
"The CN-Celeb dataset ::: Collection pipeline",
"Experiments on speaker recognition",
"Experiments on speaker recognition ::: Data",
"Experiments on speaker recognition ::: Settings",
"Experiments on speaker recognition ::: Basic results",
"Experiments on speaker recognition ::: Further comparison",
"Conclusions"
],
"paragraphs": [
[
"Speaker recognition including identification and verification, aims to recognize claimed identities of speakers. After decades of research, performance of speaker recognition systems has been vastly improved, and the technique has been deployed to a wide range of practical applications. Nevertheless, the present speaker recognition approaches are still far from reliable in unconstrained conditions where uncertainties within the speech recordings could be arbitrary. These uncertainties might be caused by multiple factors, including free text, multiple channels, environmental noises, speaking styles, and physiological status. These uncertainties make the speaker recognition task highly challenging BIBREF0, BIBREF1.",
"Researchers have devoted much effort to address the difficulties in unconstrained conditions. Early methods are based on probabilistic models that treat these uncertainties as an additive Gaussian noise. JFA BIBREF2, BIBREF3 and PLDA BIBREF4 are the most famous among such models. These models, however, are shallow and linear, and therefore cannot deal with the complexity of real-life applications. Recent advance in deep learning methods offers a new opportunity BIBREF5, BIBREF6, BIBREF7, BIBREF8. Resorting to the power of deep neural networks (DNNs) in representation learning, these methods can remove unwanted uncertainties by propagating speech signals through the DNN layer by layer and retain speaker-relevant features only BIBREF9. Significant improvement in robustness has been achieved by the DNN-based approach BIBREF10, which makes it more suitable for applications in unconstrained conditions.",
"The success of DNN-based methods, however, largely relies on a large amount of data, in particular data that involve the true complexity in unconstrained conditions. Unfortunately, most existing datasets for speaker recognition are collected in constrained conditions, where the acoustic environment, channel and speaking style do not change significantly for each speaker BIBREF11, BIBREF12, BIBREF13. These datasets tend to deliver over optimistic performance and do not meet the request of research on speaker recognition in unconstrained conditions.",
"To address this shortage in datasets, researchers have started to collect data `in the wild'. The most successful `wild' dataset may be VoxCeleb BIBREF14, BIBREF15, which contains millions of utterances from over thousands of speakers. The utterances were collected from open-source media using a fully automated pipeline based on computer vision techniques, in particular face detection, tracking and recognition, plus video-audio synchronization. The automated pipeline is almost costless, and thus greatly improves the efficiency of data collection.",
"In this paper, we re-implement the automated pipeline of VoxCeleb and collect a new large-scale speaker dataset, named CN-Celeb. Compared with VoxCeleb, CN-Celeb has three distinct features:",
"CN-Celeb specially focuses on Chinese celebrities, and contains more than $130,000$ utterances from $1,000$ persons.",
"CN-Celeb covers more genres of speech. We intentionally collected data from 11 genres, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement. The speech of a particular speaker may be in more than 5 genres. As a comparison, most of the utterances in VoxCeleb were extracted from interview videos. The diversity in genres makes our database more representative for the true scenarios in unconstrained conditions, but also more challenging.",
"CN-Celeb is not fully automated, but involves human check. We found that more complex the genre is, more errors the automated pipeline tends to produce. Ironically, the error-pron segments could be highly valuable as they tend to be boundary samples. We therefore choose a two-stage strategy that employs the automated pipeline to perform pre-selection, and then perform human check.",
"The rest of the paper is organized as follows. Section SECREF2 presents a detailed description for CN-Celeb, and Section SECREF3 presents more quantitative comparisons between CN-Celeb and VoxCeleb on the speaker recognition task. Section SECREF4 concludes the entire paper."
],
[
"The original purpose of the CN-Celeb dataset is to investigate the true difficulties of speaker recognition techniques in unconstrained conditions, and provide a resource for researchers to build prototype systems and evaluate the performance. Ideally, it can be used as a standalone data source, and can be also used with other datasets together, in particular VoxCeleb which is free and large. For this reason, CN-Celeb tries to be distinguished from but also complementary to VoxCeleb from the beginning of the design. This leads to three features that we have discussed in the previous section: Chinese focused, complex genres, and quality guarantee by human check.",
"In summary, CN-Celeb contains over $130,000$ utterances from $1,000$ Chinese celebrities. It covers 11 genres and the total amount of speech waveforms is 274 hours. Table TABREF5 gives the data distribution over the genres, and Table TABREF6 presents the data distribution over the length of utterances."
],
[
"Table TABREF13 summarizes the main difference between CN-Celeb and VoxCeleb. Compared to VoxCeleb, CN-Celeb is a more complex dataset and more challenging for speaker recognition research. More details of these challenges are as follows.",
"Most of the utterances involve real-world noise, including ambient noise, background babbling, music, cheers and laugh.",
"A certain amount of utterances involve strong and overlapped background speakers, especially in the dram and movie genres.",
"Most of speakers have different genres of utterances, which results in significant variation in speaking styles.",
"The utterances of the same speaker may be recorded at different time and with different devices, leading to serious cross-time and cross-channel problems.",
"Most of the utterances are short, which meets the scenarios of most real applications but leads to unreliable decision."
],
[
"CN-Celeb was collected following a two-stage strategy: firstly we used an automated pipeline to extract potential segments of the Person of Interest (POI), and then applied a human check to remove incorrect segments. This process is much faster than purely human-based segmentation, and reduces errors caused by a purely automated process.",
"Briefly, the automated pipeline we used is similar to the one used to collect VoxCeleb1 BIBREF14 and VoxCeleb2 BIBREF15, though we made some modification to increase efficiency and precision. Especially, we introduced a new face-speaker double check step that fused the information from both the image and speech signals to increase the recall rate while maintaining the precision.",
"The detailed steps of the collection process are summarized as follows.",
"STEP 1. POI list design. We manually selected $1,000$ Chinese celebrities as our target speakers. These speakers were mostly from the entertainment sector, such as singers, drama actors/actrees, news reporters, interviewers. Region diversity was also taken into account so that variation in accent was covered.",
"STEP 2. Pictures and videos download. Pictures and videos of the $1,000$ POIs were downloaded from the data source (https://www.bilibili.com/) by searching for the names of the persons. In order to specify that we were searching for POI names, the word `human' was added in the search queries. The downloaded videos were manually examined and were categorized into the 11 genres.",
"STEP 3. Face detection and tracking. For each POI, we first obtained the portrait of the person. This was achieved by detecting and clipping the face images from all pictures of that person. The RetinaFace algorithm was used to perform the detection and clipping BIBREF16. Afterwards, video segments that contain the target person were extracted. This was achieved by three steps: (1) For each frame, detect all the faces appearing in the frame using RetinaFace; (2) Determine if the target person appears by comparing the POI portrait and the faces detected in the frame. We used the ArcFace face recognition system BIBREF17 to perform the comparison; (3) Apply the MOSSE face tracking system BIBREF18 to produce face streams.",
"STEP 4. Active speaker verification. As in BIBREF14, an active speaker verification system was employed to verify if the speech was really spoken by the target person. This is necessary as it is possible that the target person appears in the video but the speech is from other persons. We used the SyncNet model BIBREF19 as in BIBREF14 to perform the task. This model was trained to detect if a stream of mouth movement and a stream of speech are synchronized. In our implementation, the stream of mouth movement was derived from the face stream produced by the MOSSE system.",
"STEP 5. Double check by speaker recognition.",
"Although SyncNet worked well for videos in simple genres, it failed for videos of complex genres such as movie and vlog. A possible reason is that the video content of these genres may change dramatically in time, which leads to unreliable estimation for the stream of the mouth movement, hence unreliable synchronization detection. In order to improve the robustness of the active speaker verification in complex genres, we introduced a double check procedure based on speaker recognition. The idea is simple: whenever the speaker recognition system states a very low confidence for the target speaker, the segment will be discarded even if the confidence from SyncNet is high; vice versa, if the speaker recognition system states a very high confidence, the segment will be retained. We used an off-the-shelf speaker recognition system BIBREF20 to perform this double check. In our study, this double check improved the recall rate by 30% absolutely.",
"STEP 6. Human check.",
"The segments produced by the above automated pipeline were finally checked by human. According to our experience, this human check is rather efficient: one could check 1 hour of speech in 1 hour. As a comparison, if we do not apply the automated pre-selection, checking 1 hour of speech requires 4 hours."
],
[
"In this section, we present a series of experiments on speaker recognition using VoxCeleb and CN-Celeb, to compare the complexity of the two datasets."
],
[
"VoxCeleb: The entire dataset involves two parts: VoxCeleb1 and VoxCeleb2. We used SITW BIBREF21, a subset of VoxCeleb1 as the evaluation set. The rest of VoxCeleb1 was merged with VoxCeleb2 to form the training set (simply denoted by VoxCeleb). The training set involves $1,236,567$ utterances from $7,185$ speakers, and the evaluation set involves $6,445$ utterances from 299 speakers (precisely, this is the Eval. Core set within SITW).",
"CN-Celeb: The entire dataset was split into two parts: the first part CN-Celeb(T) involves $111,260$ utterances from 800 speakers and was used as the training set; the second part CN-Celeb(E) involves $18,849$ utterances from 200 speakers and was used as the evaluation set."
],
[
"Two state-of-the-art baseline systems were built following the Kaldi SITW recipe BIBREF22: an i-vector system BIBREF3 and an x-vector system BIBREF10.",
"For the i-vector system, the acoustic feature involved 24-dimensional MFCCs plus the log energy, augmented by the first- and second-order derivatives. We also applied the cepstral mean normalization (CMN) and the energy-based voice active detection (VAD). The universal background model (UBM) consisted of $2,048$ Gaussian components, and the dimensionality of the i-vector space was 400. LDA was applied to reduce the dimensionality of the i-vectors to 150. The PLDA model was used for scoring BIBREF4.",
"For the x-vector system, the feature-learning component was a 5-layer time-delay neural network (TDNN). The slicing parameters for the five time-delay layers were: {$t$-2, $t$-1, $t$, $t$+1, $t$+2}, {$t$-2, $t$, $t$+2}, {$t$-3, $t$, $t$+3}, {$t$}, {$t$}. The statistic pooling layer computed the mean and standard deviation of the frame-level features from a speech segment. The size of the output layer was consistent with the number of speakers in the training set. Once trained, the activations of the penultimate hidden layer were read out as x-vectors. In our experiments, the dimension of the x-vectors trained on VoxCeleb was set to 512, while for CN-Celeb, it was set to 256, considering the less number of speakers in the training set. Afterwards, the x-vectors were projected to 150-dimensional vectors by LDA, and finally the PLDA model was employed to score the trials. Refer to BIBREF10 for more details."
],
[
"We first present the basic results evaluated on SITW and CN-Celeb(E). Both the front-end (i-vector or x-vector models) and back-end (LDA-PLDA) models were trained with the VoxCeleb training set. Note that for SITW, the averaged length of the utterances is more than 80 seconds, while this number is about 8 seconds for CN-Celeb(E). For a better comparison, we resegmented the data of SITW and created a new dataset denoted by SITW(S), where the averaged lengths of the enrollment and test utterances are 28 and 8 seconds, respectively. These numbers are similar to the statistics of CN-Celeb(E).",
"The results in terms of the equal error rate (EER) are reported in Table TABREF24. It can be observed that for both the i-vector system and the x-vector system, the performance on CN-Celeb(E) is much worse than the performance on SITW and SITW(S). This indicates that there is big difference between these two datasets. From another perspective, it demonstrates that the model trained with VoxCeleb does not generalize well, although it has achieved reasonable performance on data from a similar source (SITW)."
],
[
"To further compare CN-Celeb and VoxCeleb in a quantitative way, we built systems based on CN-Celeb and VoxCeleb, respectively. For a fair comparison, we randomly sampled 800 speakers from VoxCeleb and built a new dataset VoxCeleb(L) whose size is comparable to CN-Celeb(T). This data set was used for back-end (LDA-PLDA) training.",
"The experimental results are shown in Table TABREF26. Note that the performance of all the comparative experiments show the same trend with the i-vector system and the x-vector system, we therefore only analyze the i-vector results.",
"Firstly, it can be seen that the system trained purely on VoxCeleb obtained good performance on SITW(S) (1st row). This is understandable as VoxCeleb and SITW(S) were collected from the same source. For the pure CN-Celeb system (2nd row), although CN-Celeb(T) and CN-Celeb(E) are from the same source, the performance is still poor (14.24%). More importantly, with re-training the back-end model with VoxCeleb(L) (4th row), the performance on SITW becomes better than the same-source result on CN-Celeb(E) (11.34% vs 14.24%). All these results reconfirmed the significant difference between the two datasets, and indicates that CN-Celeb is more challenging than VoxCeleb."
],
[
"We introduced a free dataset CN-Celeb for speaker recognition research. The dataset contains more than $130k$ utterances from $1,000$ Chinese celebrities, and covers 11 different genres in real world. We compared CN-Celeb and VoxCeleb, a widely used dataset in speaker recognition, by setting up a series of experiments based on two state-of-the-art speaker recognition models. Experimental results demonstrated that CN-Celeb is significantly different from VoxCeleb, and it is more challenging for speaker recognition research. The EER performance we obtained in this paper suggests that in unconstrained conditions, the performance of the current speaker recognition techniques might be much worse than it was thought."
]
]
} | {
"question": [
"What was the performance of both approaches on their dataset?",
"What kind of settings do the utterances come from?",
"What genres are covered?",
"Do they experiment with cross-genre setups?",
"Which of the two speech recognition models works better overall on CN-Celeb?",
"By how much is performance on CN-Celeb inferior to performance on VoxCeleb?"
],
"question_id": [
"8c0a0747a970f6ea607ff9b18cfeb738502d9a95",
"529dabe7b4a8a01b20ee099701834b60fb0c43b0",
"a2be2bd84e5ae85de2ab9968147b3d49c84dfb7f",
"5699996a7a2bb62c68c1e62e730cabf1e3186eef",
"944d5dbe0cfc64bf41ea36c11b1d378c408d40b8",
"327e6c6609fbd4c6ae76284ca639951f03eb4a4c"
],
"nlp_background": [
"",
"",
"",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"",
"",
"",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"",
"",
"",
"no",
"no",
"no"
],
"search_query": [
"dataset",
"dataset",
"dataset",
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "ERR of 19.05 with i-vectors and 15.52 with x-vectors",
"evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
]
}
],
"annotation_id": [
"45270b732239f93ee0e569f36984323d0dde8fd6"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"CN-Celeb specially focuses on Chinese celebrities, and contains more than $130,000$ utterances from $1,000$ persons.",
"CN-Celeb covers more genres of speech. We intentionally collected data from 11 genres, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement. The speech of a particular speaker may be in more than 5 genres. As a comparison, most of the utterances in VoxCeleb were extracted from interview videos. The diversity in genres makes our database more representative for the true scenarios in unconstrained conditions, but also more challenging."
],
"highlighted_evidence": [
"CN-Celeb specially focuses on Chinese celebrities, and contains more than $130,000$ utterances from $1,000$ persons.\n\nCN-Celeb covers more genres of speech. We intentionally collected data from 11 genres, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement. The speech of a particular speaker may be in more than 5 genres. As a comparison, most of the utterances in VoxCeleb were extracted from interview videos. The diversity in genres makes our database more representative for the true scenarios in unconstrained conditions, but also more challenging."
]
}
],
"annotation_id": [
"d52158f81f0a690c7747aea82ced7b57c7f48c2b"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "genre, entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement",
"evidence": [
"FLOAT SELECTED: Table 1. The distribution over genres."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1. The distribution over genres."
]
}
],
"annotation_id": [
"2e6fa762aa2a37f00c418a565e35068d2f14dd6a"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"02fce27e075bf24c3867c3c0a4449bac4ef5b925"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "x-vector",
"evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
]
}
],
"annotation_id": [
"28915bb2904719dec4e6f3fcc4426d758d76dde1"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "For i-vector system, performances are 11.75% inferior to voxceleb. For x-vector system, performances are 10.74% inferior to voxceleb",
"evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets."
]
}
],
"annotation_id": [
"dcde763fd85294ed182df9966c9fdb8dca3ec7eb"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} | {
"caption": [
"Table 2. The distribution over utterance length.",
"Table 1. The distribution over genres.",
"Table 3. Comparison between CN-Celeb and VoxCeleb.",
"Table 4. EER(%) results of the i-vector and x-vector systems trained on VoxCeleb and evaluated on three evaluation sets.",
"Table 5. EER(%) results with different data settings."
],
"file": [
"2-Table2-1.png",
"2-Table1-1.png",
"2-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png"
]
} |
1812.06705 | Conditional BERT Contextual Augmentation | We propose a novel data augmentation method for labeled sentences called conditional BERT contextual augmentation. Data augmentation methods are often applied to prevent overfitting and improve generalization of deep neural network models. Recently proposed contextual augmentation augments labeled sentences by randomly replacing words with more varied substitutions predicted by language model. BERT demonstrates that a deep bidirectional language model is more powerful than either an unidirectional language model or the shallow concatenation of a forward and backward model. We retrofit BERT to conditional BERT by introducing a new conditional masked language model\footnote{The term"conditional masked language model"appeared once in original BERT paper, which indicates context-conditional, is equivalent to term"masked language model". In our paper,"conditional masked language model"indicates we apply extra label-conditional constraint to the"masked language model".} task. The well trained conditional BERT can be applied to enhance contextual augmentation. Experiments on six various different text classification tasks show that our method can be easily applied to both convolutional or recurrent neural networks classifier to obtain obvious improvement. | {
"section_name": [
"Introduction",
"Fine-tuning on Pre-trained Language Model",
"Text Data Augmentation",
"Preliminary: Masked Language Model Task",
"Conditional BERT",
"Conditional BERT Contextual Augmentation",
"Experiment",
"Datasets",
"Text classification",
"Connection to Style Transfer",
"Conclusions and Future Work"
],
"paragraphs": [
[
"Deep neural network-based models are easy to overfit and result in losing their generalization due to limited size of training data. In order to address the issue, data augmentation methods are often applied to generate more training samples. Recent years have witnessed great success in applying data augmentation in the field of speech area BIBREF0 , BIBREF1 and computer vision BIBREF2 , BIBREF3 , BIBREF4 . Data augmentation in these areas can be easily performed by transformations like resizing, mirroring, random cropping, and color shifting. However, applying these universal transformations to texts is largely randomized and uncontrollable, which makes it impossible to ensure the semantic invariance and label correctness. For example, given a movie review “The actors is good\", by mirroring we get “doog si srotca ehT\", or by random cropping we get “actors is\", both of which are meaningless.",
"Existing data augmentation methods for text are often loss of generality, which are developed with handcrafted rules or pipelines for specific domains. A general approach for text data augmentation is replacement-based method, which generates new sentences by replacing the words in the sentences with relevant words (e.g. synonyms). However, words with synonyms from a handcrafted lexical database likes WordNet BIBREF5 are very limited , and the replacement-based augmentation with synonyms can only produce limited diverse patterns from the original texts. To address the limitation of replacement-based methods, Kobayashi BIBREF6 proposed contextual augmentation for labeled sentences by offering a wide range of substitute words, which are predicted by a label-conditional bidirectional language model according to the context. But contextual augmentation suffers from two shortages: the bidirectional language model is simply shallow concatenation of a forward and backward model, and the usage of LSTM models restricts their prediction ability to a short range.",
"BERT, which stands for Bidirectional Encoder Representations from Transformers, pre-trained deep bidirectional representations by jointly conditioning on both left and right context in all layers. BERT addressed the unidirectional constraint by proposing a “masked language model\" (MLM) objective by masking some percentage of the input tokens at random, and predicting the masked words based on its context. This is very similar to how contextual augmentation predict the replacement words. But BERT was proposed to pre-train text representations, so MLM task is performed in an unsupervised way, taking no label variance into consideration.",
"This paper focuses on the replacement-based methods, by proposing a novel data augmentation method called conditional BERT contextual augmentation. The method applies contextual augmentation by conditional BERT, which is fine-tuned on BERT. We adopt BERT as our pre-trained language model with two reasons. First, BERT is based on Transformer. Transformer provides us with a more structured memory for handling long-term dependencies in text. Second, BERT, as a deep bidirectional model, is strictly more powerful than the shallow concatenation of a left-to-right and right-to left model. So we apply BERT to contextual augmentation for labeled sentences, by offering a wider range of substitute words predicted by the masked language model task. However, the masked language model predicts the masked word based only on its context, so the predicted word maybe incompatible with the annotated labels of the original sentences. In order to address this issue, we introduce a new fine-tuning objective: the \"conditional masked language model\"(C-MLM). The conditional masked language model randomly masks some of the tokens from an input, and the objective is to predict a label-compatible word based on both its context and sentence label. Unlike Kobayashi's work, the C-MLM objective allows a deep bidirectional representations by jointly conditioning on both left and right context in all layers. In order to evaluate how well our augmentation method improves performance of deep neural network models, following Kobayashi BIBREF6 , we experiment it on two most common neural network structures, LSTM-RNN and CNN, on text classification tasks. Through the experiments on six various different text classification tasks, we demonstrate that the proposed conditional BERT model augments sentence better than baselines, and conditional BERT contextual augmentation method can be easily applied to both convolutional or recurrent neural networks classifier. We further explore our conditional MLM task’s connection with style transfer task and demonstrate that our conditional BERT can also be applied to style transfer too.",
"Our contributions are concluded as follows:",
"To our best knowledge, this is the first attempt to alter BERT to a conditional BERT or apply BERT on text generation tasks."
],
[
"Language model pre-training has attracted wide attention and fine-tuning on pre-trained language model has shown to be effective for improving many downstream natural language processing tasks. Dai BIBREF7 pre-trained unlabeled data to improve Sequence Learning with recurrent networks. Howard BIBREF8 proposed a general transfer learning method, Universal Language Model Fine-tuning (ULMFiT), with the key techniques for fine-tuning a language model. Radford BIBREF9 proposed that by generative pre-training of a language model on a diverse corpus of unlabeled text, large gains on a diverse range of tasks could be realized. Radford BIBREF9 achieved large improvements on many sentence-level tasks from the GLUE benchmark BIBREF10 . BERT BIBREF11 obtained new state-of-the-art results on a broad range of diverse tasks. BERT pre-trained deep bidirectional representations which jointly conditioned on both left and right context in all layers, following by discriminative fine-tuning on each specific task. Unlike previous works fine-tuning pre-trained language model to perform discriminative tasks, we aim to apply pre-trained BERT on generative tasks by perform the masked language model(MLM) task. To generate sentences that are compatible with given labels, we retrofit BERT to conditional BERT, by introducing a conditional masked language model task and fine-tuning BERT on the task."
],
[
"Text data augmentation has been extensively studied in natural language processing. Sample-based methods includes downsampling from the majority classes and oversampling from the minority class, both of which perform weakly in practice. Generation-based methods employ deep generative models such as GANs BIBREF12 or VAEs BIBREF13 , BIBREF14 , trying to generate sentences from a continuous space with desired attributes of sentiment and tense. However, sentences generated in these methods are very hard to guarantee the quality both in label compatibility and sentence readability. In some specific areas BIBREF15 , BIBREF16 , BIBREF17 . word replacement augmentation was applied. Wang BIBREF18 proposed the use of neighboring words in continuous representations to create new instances for every word in a tweet to augment the training dataset. Zhang BIBREF19 extracted all replaceable words from the given text and randomly choose $r$ of them to be replaced, then substituted the replaceable words with synonyms from WordNet BIBREF5 . Kolomiyets BIBREF20 replaced only the headwords under a task-specific assumption that temporal trigger words usually occur as headwords. Kolomiyets BIBREF20 selected substitute words with top- $K$ scores given by the Latent Words LM BIBREF21 , which is a LM based on fixed length contexts. Fadaee BIBREF22 focused on the rare word problem in machine translation, replacing words in a source sentence with only rare words. A word in the translated sentence is also replaced using a word alignment method and a rightward LM. The work most similar to our research is Kobayashi BIBREF6 . Kobayashi used a fill-in-the-blank context for data augmentation by replacing every words in the sentence with language model. In order to prevent the generated words from reversing the information related to the labels of the sentences, Kobayashi BIBREF6 introduced a conditional constraint to control the replacement of words. Unlike previous works, we adopt a deep bidirectional language model to apply replacement, and the attention mechanism within our model allows a more structured memory for handling long-term dependencies in text, which resulting in more general and robust improvement on various downstream tasks."
],
[
"In general, the language model(LM) models the probability of generating natural language sentences or documents. Given a sequence $\\textbf {\\textit {S}}$ of N tokens, $<t_1,t_2,...,t_N>$ , a forward language model allows us to predict the probability of the sequence as: ",
"$$p(t_1,t_2,...,t_N) = \\prod _{i=1}^{N}p(t_i|t_1, t_2,..., t_{i-1}).$$ (Eq. 8) ",
"Similarly, a backward language model allows us to predict the probability of the sentence as: ",
"$$p(t_1,t_2,...,t_N) = \\prod _{i=1}^{N}p(t_i|t_{i+1}, t_{i+2},..., t_N).$$ (Eq. 9) ",
"Traditionally, a bidirectional language model a shallow concatenation of independently trained forward and backward LMs.",
"In order to train a deep bidirectional language model, BERT proposed Masked Language Model (MLM) task, which was also referred to Cloze Task BIBREF23 . MLM task randomly masks some percentage of the input tokens, and then predicts only those masked tokens according to their context. Given a masked token ${t_i}$ , the context is the tokens surrounding token ${t_i}$ in the sequence $\\textbf {\\textit {S}}$ , i.e. cloze sentence ${\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace }$ . The final hidden vectors corresponding to the mask tokens are fed into an output softmax over the vocabulary to produce words with a probability distribution ${p(\\cdot |\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace )}$ . MLM task only predicts the masked words rather than reconstructing the entire input, which suggests that more pre-training steps are required for the model to converge. Pre-trained BERT can augment sentences through MLM task, by predicting new words in masked positions according to their context."
],
[
"As shown in Fig 1 , our conditional BERT shares the same model architecture with the original BERT. The differences are the input representation and training procedure.",
"The input embeddings of BERT are the sum of the token embeddings, the segmentation embeddings and the position embeddings. For the segmentation embeddings in BERT, a learned sentence A embedding is added to every token of the first sentence, and if a second sentence exists, a sentence B embedding will be added to every token of the second sentence. However, the segmentation embeddings has no connection to the actual annotated labels of a sentence, like sense, sentiment or subjectivity, so predicted word is not always compatible with annotated labels. For example, given a positive movie remark “this actor is good\", we have the word “good\" masked. Through the Masked Language Model task by BERT, the predicted word in the masked position has potential to be negative word likes \"bad\" or \"boring\". Such new generated sentences by substituting masked words are implausible with respect to their original labels, which will be harmful if added to the corpus to apply augmentation. In order to address this issue, we propose a new task: “conditional masked language model\".",
"The conditional masked language model randomly masks some of the tokens from the labeled sentence, and the objective is to predict the original vocabulary index of the masked word based on both its context and its label. Given a masked token ${t_i}$ , the context ${\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace }$ and label ${y}$ are both considered, aiming to calculate ${p(\\cdot |y,\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace )}$ , instead of calculating ${p(\\cdot |\\textbf {\\textit {S}}\\backslash \\lbrace t_i \\rbrace )}$ . Unlike MLM pre-training, the conditional MLM objective allows the representation to fuse the context information and the label information, which allows us to further train a label-conditional deep bidirectional representations.",
"To perform conditional MLM task, we fine-tune on pre-trained BERT. We alter the segmentation embeddings to label embeddings, which are learned corresponding to their annotated labels on labeled datasets. Note that the BERT are designed with segmentation embedding being embedding A or embedding B, so when a downstream task dataset with more than two labels, we have to adapt the size of embedding to label size compatible. We train conditional BERT using conditional MLM task on labeled dataset. After the model has converged, it is expected to be able to predict words in masked position both considering the context and the label."
],
[
"After the conditional BERT is well-trained, we utilize it to augment sentences. Given a labeled sentence from the corpus, we randomly mask a few words in the sentence. Through conditional BERT, various words compatibly with the label of the sentence are predicted by conditional BERT. After substituting the masked words with predicted words, a new sentences is generated, which shares similar context and same label with original sentence. Then new sentences are added to original corpus. We elaborate the entire process in algorithm \"Conditional BERT Contextual Augmentation\" .",
"Conditional BERT contextual augmentation algorithm. Fine-tuning on the pre-trained BERT , we retrofit BERT to conditional BERT using conditional MLM task on labeled dataset. After the model converged, we utilize it to augment sentences. New sentences are added into dataset to augment the dataset. [1] Alter the segmentation embeddings to label embeddings Fine-tune the pre-trained BERT using conditional MLM task on labeled dataset D until convergence each iteration i=1,2,...,M Sample a sentence $s$ from D Randomly mask $k$ words Using fine-tuned conditional BERT to predict label-compatible words on masked positions to generate a new sentence $S^{\\prime }$ Add new sentences into dataset $D$ to get augmented dataset $D^{\\prime }$ Perform downstream task on augmented dataset $D^{\\prime }$ "
],
[
"In this section, we present conditional BERT parameter settings and, following Kobayashi BIBREF6 , we apply different augmentation methods on two types of neural models through six text classification tasks. The pre-trained BERT model we used in our experiment is BERT $_{BASE}$ , with number of layers (i.e., Transformer blocks) $L = 12$ , the hidden size $ H = 768$ , and the number of self-attention heads $A = 12$ , total parameters $= 110M$ . Detailed pre-train parameters setting can be found in original paper BIBREF11 . For each task, we perform the following steps independently. First, we evaluate the augmentation ability of original BERT model pre-trained on MLM task. We use pre-trained BERT to augment dataset, by predicted masked words only condition on context for each sentence. Second, we fine-tune the original BERT model to a conditional BERT. Well-trained conditional BERT augments each sentence in dataset by predicted masked words condition on both context and label. Third, we compare the performance of the two methods with Kobayashi's BIBREF6 contextual augmentation results. Note that the original BERT’s segmentation embeddings layer is compatible with two-label dataset. When the task-specific dataset is with more than two different labels, we should re-train a label size compatible label embeddings layer instead of directly fine-tuning the pre-trained one."
],
[
"Six benchmark classification datasets are listed in table 1 . Following Kim BIBREF24 , for a dataset without validation data, we use 10% of its training set for the validation set. Summary statistics of six classification datasets are shown in table 1.",
"SST BIBREF25 SST (Stanford Sentiment Treebank) is a dataset for sentiment classification on movie reviews, which are annotated with five labels (SST5: very positive, positive, neutral, negative, or very negative) or two labels (SST2: positive or negative).",
"Subj BIBREF26 Subj (Subjectivity dataset) is annotated with whether a sentence is subjective or objective.",
"MPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences, which contains news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.).",
"RT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee.",
"TREC BIBREF29 TREC is a dataset for classification of the six question types (whether the question is about person, location, numeric information, etc.)."
],
[
"We evaluate the performance improvement brought by conditional BERT contextual augmentation on sentence classification tasks, so we need to prepare two common sentence classifiers beforehand. For comparison, following Kobayashi BIBREF6 , we adopt two typical classifier architectures: CNN or LSTM-RNN. The CNN-based classifier BIBREF24 has convolutional filters of size 3, 4, 5 and word embeddings. All outputs of each filter are concatenated before applied with a max-pooling over time, then fed into a two-layer feed-forward network with ReLU, followed by the softmax function. An RNN-based classifier has a single layer LSTM and word embeddings, whose output is fed into an output affine layer with the softmax function. For both the architectures, dropout BIBREF30 and Adam optimization BIBREF31 are applied during training. The train process is finish by early stopping with validation at each epoch.",
"Sentence classifier hyper-parameters including learning rate, embedding dimension, unit or filter size, and dropout ratio, are selected using grid-search for each task-specific dataset. We refer to Kobayashi's implementation in the released code. For BERT, all hyper-parameters are kept the same as Devlin BIBREF11 , codes in Tensorflow and PyTorch are all available on github and pre-trained BERT model can also be downloaded. The number of conditional BERT training epochs ranges in [1-50] and number of masked words ranges in [1-2].",
"We compare the performance improvements obtained by our proposed method with the following baseline methods, “w/\" means “with\":",
"w/synonym: Words are randomly replaced with synonyms from WordNet BIBREF5 .",
"w/context: Proposed by Kobayashi BIBREF6 , which used a bidirectional language model to apply contextual augmentation, each word was replaced with a probability.",
"w/context+label: Kobayashi’s contextual augmentation method BIBREF6 in a label-conditional LM architecture.",
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does. For we masked words randomly, the masked words may be label-sensitive or label-insensitive. If label-insensitive words are masked, words predicted through BERT may not be compatible with original labels. The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks.",
"We also explore the effect of number of training steps to the performance of conditional BERT data augmentation. The fine-tuning epoch setting ranges in [1-50], we list the fine-tuning epoch of conditional BERT to outperform BERT for various benchmarks in table 3 . The results show that our conditional BERT contextual augmentation can achieve obvious performance improvement after only a few fine-tuning epochs, which is very convenient to apply to downstream tasks."
],
[
"In this section, we further deep into the connection to style transfer and apply our well trained conditional BERT to style transfer task. Style transfer is defined as the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context BIBREF32 . Our conditional MLM task changes words in the text condition on given label without changing the context. View from this point, the two tasks are very close. So in order to apply conditional BERT to style transfer task, given a specific stylistic sentence, we break it into two steps: first, we find the words relevant to the style; second, we mask the style-relevant words, then use conditional BERT to predict new substitutes with sentence context and target style property. In order to find style-relevant words in a sentence, we refer to Xu BIBREF33 , which proposed an attention-based method to extract the contribution of each word to the sentence sentimental label. For example, given a positive movie remark “This movie is funny and interesting\", we filter out the words contributes largely to the label and mask them. Then through our conditional BERT contextual augmentation method, we fill in the masked position by predicting words conditioning on opposite label and sentence context, resulting in “This movie is boring and dull\". The words “boring\" and “dull\" contribute to the new sentence being labeled as negative style. We sample some sentences from dataset SST2, transferring them to the opposite label, as listed in table 4 ."
],
[
"In this paper, we fine-tune BERT to conditional BERT by introducing a novel conditional MLM task. After being well trained, the conditional BERT can be applied to data augmentation for sentence classification tasks. Experiment results show that our model outperforms several baseline methods obviously. Furthermore, we demonstrate that our conditional BERT can also be applied to style transfer task. In the future, (1)We will explore how to perform text data augmentation on imbalanced datasets with pre-trained language model, (2) we believe the idea of conditional BERT contextual augmentation is universal and will be applied to paragraph or document level data augmentation."
]
]
} | {
"question": [
"On what datasets is the new model evaluated on?",
"How do the authors measure performance?",
"Does the new objective perform better than the original objective bert is trained on?",
"Are other pretrained language models also evaluated for contextual augmentation? ",
"Do the authors report performance of conditional bert on tasks without data augmentation?"
],
"question_id": [
"df8cc1f395486a12db98df805248eb37c087458b",
"6e97c06f998f09256be752fa75c24ba853b0db24",
"de2d33760dc05f9d28e9dabc13bab2b3264cadb7",
"63bb39fd098786a510147f8ebc02408de350cb7c",
"6333845facb22f862ffc684293eccc03002a4830"
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"topic_background": [
"research",
"research",
"research",
"research",
"familiar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"BERT",
"BERT",
"BERT",
"BERT",
"BERT"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"SST (Stanford Sentiment Treebank)",
"Subj (Subjectivity dataset)",
"MPQA Opinion Corpus",
"RT is another movie review sentiment dataset",
"TREC is a dataset for classification of the six question types"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"SST BIBREF25 SST (Stanford Sentiment Treebank) is a dataset for sentiment classification on movie reviews, which are annotated with five labels (SST5: very positive, positive, neutral, negative, or very negative) or two labels (SST2: positive or negative).",
"Subj BIBREF26 Subj (Subjectivity dataset) is annotated with whether a sentence is subjective or objective.",
"MPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences, which contains news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.).",
"RT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee.",
"TREC BIBREF29 TREC is a dataset for classification of the six question types (whether the question is about person, location, numeric information, etc.).",
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
],
"highlighted_evidence": [
"SST BIBREF25 SST (Stanford Sentiment Treebank) is a dataset for sentiment classification on movie reviews, which are annotated with five labels (SST5: very positive, positive, neutral, negative, or very negative) or two labels (SST2: positive or negative).\n\nSubj BIBREF26 Subj (Subjectivity dataset) is annotated with whether a sentence is subjective or objective.\n\nMPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences, which contains news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.).\n\nRT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee.\n\nTREC BIBREF29 TREC is a dataset for classification of the six question types (whether the question is about person, location, numeric information, etc.).",
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
]
}
],
"annotation_id": [
"da6a68609a4ef853fbdc85494dbb628978a9d63d"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Accuracy across six datasets",
"evidence": [
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
]
}
],
"annotation_id": [
"3d4d56e4c3dcfc684bf56a1af8d6c3d0e94ab405"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does. For we masked words randomly, the masked words may be label-sensitive or label-insensitive. If label-insensitive words are masked, words predicted through BERT may not be compatible with original labels. The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks."
],
"highlighted_evidence": [
"The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks."
]
}
],
"annotation_id": [
"5dc1d75b5817b4b29cadcfe5da1b8796e3482fe5"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018)."
]
}
],
"annotation_id": [
"09963269da86b53287634c76b47ecf335c9ce1d1"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does. For we masked words randomly, the masked words may be label-sensitive or label-insensitive. If label-insensitive words are masked, words predicted through BERT may not be compatible with original labels. The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks."
],
"highlighted_evidence": [
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does."
]
}
],
"annotation_id": [
"033ab0c50e8d68b359f9fb259227becc14b5e942"
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
]
} | {
"caption": [
"Figure 1: Model architecture of conditional BERT. The label embeddings in conditional BERT corresponding to segmentation embeddings in BERT, but their functions are different.",
"Table 1: Summary statistics for the datasets after tokenization. c: Number of target classes. l: Average sentence length. N : Dataset size. |V |: Vocabulary size. Test: Test set size (CV means there was no standard train/test split and thus 10-fold cross-validation was used).",
"Table 2: Accuracies of different methods for various benchmarks on two classifier architectures. CBERT, which represents conditional BERT, performs best on two classifier structures over six datasets. “w/” represents “with”, lines marked with “*” are experiments results from Kobayashi(Kobayashi, 2018).",
"Table 3: Fine-tuning epochs of conditional BERT to outperform BERT for various benchmarks",
"Table 4: Examples generated by conditional BERT on the SST2 dataset. To perform style transfer, we reverse the original label of a sentence, and conditional BERT output a new label compatible sentence."
],
"file": [
"5-Figure1-1.png",
"5-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png"
]
} |