Dataset schema (each record below lists these fields in this order, one field per line):

doc_id           string, lengths 4 to 10
revision_depth   int64, range 1 to 4
before_revision  string, lengths 135 to 9.03k
after_revision   string, lengths 144 to 8.89k
edit_actions     list
sents_char_pos   sequence
domain           string, 3 distinct classes
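Each edit_actions entry records an operation type ("R" replace, "A" add, "D" delete), the before and after spans, character offsets into before_revision, and annotated intent labels. Below is a minimal sketch of replaying a record's edit_actions to reconstruct its after_revision, assuming the semantics visible in the records: offsets index into before_revision, "A" entries have start_char_pos equal to end_char_pos with a null before, and "D" entries have a null after. Applying the edits from right to left keeps the offsets of the remaining edits valid.

def apply_edit_actions(before: str, edit_actions: list[dict]) -> str:
    """Rebuild after_revision by replaying edit_actions on before_revision.

    Splicing in descending order of start_char_pos means each edit leaves
    the offsets of all edits to its left untouched.
    """
    text = before
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # "D" entries carry a null after
        text = text[:act["start_char_pos"]] + replacement + text[act["end_char_pos"]:]
    return text

For the first record below, this should reproduce after_revision from before_revision: it splices ", called FLUE (French Language Understanding Evaluation), are shared to" over characters 1109 to 1124, then "state-of-the art" over characters 50 to 66.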
1912.05372
1
Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018), BERT (Devlin et al., 2019), or XLNet (Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to complex NLP tasks (natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks are shared with the research community for further reproducible experiments in French NLP.
Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018), BERT (Devlin et al., 2019), or XLNet (Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to complex NLP tasks (natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks , called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.
[ { "type": "R", "before": "state-of-the-art", "after": "state-of-the art", "start_char_pos": 50, "end_char_pos": 66, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "are shared with", "after": ", called FLUE (French Language Understanding Evaluation), are shared to", "start_char_pos": 1109, "end_char_pos": 1124, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 133, 378, 568, 681, 811, 1011 ]
arxiv
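The sents_char_pos field (e.g. [0, 133, 378, 568, 681, 811, 1011] in the record above) appears to hold sentence-boundary character offsets into before_revision, one entry per sentence, starting at 0. A minimal sketch of recovering the sentence list under that assumption, with the last sentence running to the end of the string:

def split_sentences(before: str, sents_char_pos: list[int]) -> list[str]:
    # Treat each offset as the start of a sentence; the final sentence runs
    # to the end of before_revision. strip() drops boundary whitespace.
    bounds = list(sents_char_pos) + [len(before)]
    return [before[start:end].strip() for start, end in zip(bounds, bounds[1:])]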
1912.05372
2
Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized word representations such as OpenAI GPT (Radford et al., 2018 ), BERT ( Devlin et al., 2019 ), or XLNet ( Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to complex NLP tasks ( natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.
Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018 ; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019 ; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks ( text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pre-training approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.
[ { "type": "R", "before": "word representations such as OpenAI GPT (Radford", "after": "representations (Dai and Le, 2015; Peters", "start_char_pos": 446, "end_char_pos": 494, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "), BERT (", "after": "; Howard and Ruder, 2018; Radford et al., 2018;", "start_char_pos": 508, "end_char_pos": 517, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "), or XLNet (", "after": ";", "start_char_pos": 538, "end_char_pos": 551, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "complex", "after": "diverse", "start_char_pos": 855, "end_char_pos": 862, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "text classification, paraphrasing,", "start_char_pos": 875, "end_char_pos": 875, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 133, 378, 572, 685, 815, 1017 ]
arxiv
1912.10514
2
An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of back-translations of the target-side monolingual data. The method was not able to utilize the available huge amount of monolingual data because of the inability of models to differentiate between the authentic and synthetic parallel data . Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling the use of iterative back-translation on language pairs that under-performed using standard back-translation. This workpresents pre-training and fine-tuning as a simplified but more effective approach of differentiating between the two data. The approach - tag-less back-translation - trains the model on the synthetic data and fine-tunes it on the authentic data. Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively on low resource English-Vietnamese NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU. The approach reached the best scores in less training time than the standard and tagged back-translation approaches .
An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of the back-translations of the target-side monolingual data. The standard back-translation method has been shown to be unable to efficiently utilize the available huge amount of existing monolingual data because of the inability of translation models to differentiate between the authentic and synthetic parallel data during training . Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling the use of iterative back-translation on language pairs that underperformed using standard back-translation. In this work, we approach back-translation as a domain adaptation problem, eliminating the need for explicit tagging. In the approach --tag-less back-translation -- the synthetic and authentic parallel data are treated as out-of-domain and in-domain data respectively and, through pre-training and fine-tuning, the translation model is shown to be able to learn more efficiently from them during training. Experimental results have shown that the approach outperforms the standard and tagged back-translation approaches on low resource English-Vietnamese and English-German neural machine translation .
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 146, "end_char_pos": 146, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "method was not able to", "after": "standard back-translation method has been shown to be unable to efficiently", "start_char_pos": 206, "end_char_pos": 228, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "A", "before": null, "after": "existing", "start_char_pos": 266, "end_char_pos": 266, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": "translation", "start_char_pos": 312, "end_char_pos": 312, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "during training", "start_char_pos": 387, "end_char_pos": 387, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "under-performed", "after": "underperformed", "start_char_pos": 626, "end_char_pos": 641, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "This workpresents", "after": "In this work, we approach back-translation as a domain adaptation problem, eliminating the need for explicit tagging. In the approach --", "start_char_pos": 675, "end_char_pos": 692, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] }, { "type": "A", "before": null, "after": "tag-less back-translation", "start_char_pos": 692, "end_char_pos": 692, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "-- the synthetic and authentic parallel data are treated as out-of-domain and in-domain data respectively and, through", "start_char_pos": 693, "end_char_pos": 693, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "fine-tuning as a simplified but more effective approach of differentiating between the two data. The approach - tag-less", "after": "fine-tuning, the translation model is shown to be able to learn more efficiently from them during training. Experimental results have shown that the approach outperforms the standard and tagged", "start_char_pos": 711, "end_char_pos": 831, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "- trains the model on the synthetic data and fine-tunes it on the authentic data. Experiments have shown the approach to outperform the baseline and standard back-translation by 4.0 and 0.7 BLEU respectively", "after": "approaches", "start_char_pos": 849, "end_char_pos": 1056, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "NMT. While the need for tagging (noising) the dataset has been removed, the technique outperformed tagged back-translation by 0.4 BLEU. The approach reached the best scores in less training time than the standard and tagged back-translation approaches", "after": "and English-German neural machine translation", "start_char_pos": 1092, "end_char_pos": 1343, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] } ]
[ 0, 201, 389, 674, 807, 930, 1227 ]
arxiv
1912.10616
1
Authorship attribution is the process of identifying the author of a text. Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set. While deep learning methods have been applied to classification-based approaches, current similarity-based methods only embody static notions of similarity. Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of semantic relatedness in NLP. We examine their application to the stylistic task of authorship attribution , and show that they can substantially outperform both classification- and existing similarity-based approaches on datasets with large numbers of authors .
Authorship attribution is the process of identifying the author of a text. Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set. While deep learning methods have been applied to classification-based approaches, applications to similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity. Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP. We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform both classification- and existing similarity-based approaches . We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance .
[ { "type": "R", "before": "current", "after": "applications to", "start_char_pos": 358, "end_char_pos": 365, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "applications have been limited, and most similarity-based", "start_char_pos": 383, "end_char_pos": 383, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "A", "before": null, "after": "mostly", "start_char_pos": 554, "end_char_pos": 554, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": ", and", "after": "on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and", "start_char_pos": 661, "end_char_pos": 666, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "on datasets with large numbers of authors", "after": ". We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance", "start_char_pos": 773, "end_char_pos": 814, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 74, 275, 433, 583 ]
arxiv
1912.10616
2
Authorship attribution is the process of identifying the author of a text. Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set . While deep learning methodshave been applied to classification-based approaches, applications to similarity-based applications have been limited, and most similarity-based methods only embody static notions of similarity . Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP. We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform both classification- and existing similarity-based approaches. We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance .
Authorship attribution is the process of identifying the author of a text. Approaches to tackling it have been conventionally divided into classification-based ones, which work well for small numbers of candidate authors, and similarity-based methods , which are applicable for larger numbers of authors or for authors beyond the training set ; these existing similarity-based methods have only embodied static notions of similarity. Deep learning methods, which blur the boundaries between classification-based and similarity-based approaches, are promising in terms of ability to learn a notion of similarity, but have previously only been used in a conventional small-closed-class classification setup . Siamese networks have been used to develop learned notions of similarity in one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP. We examine their application to the stylistic task of authorship attribution on datasets with large numbers of authors, looking at multiple energy functions and neural network architectures, and show that they can substantially outperform previous approaches .
[ { "type": "R", "before": "Classification-based approaches", "after": "Approaches to tackling it have been conventionally divided into classification-based ones, which", "start_char_pos": 75, "end_char_pos": 106, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "but only", "after": "and", "start_char_pos": 157, "end_char_pos": 165, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "A", "before": null, "after": ", which", "start_char_pos": 191, "end_char_pos": 191, "major_intent": "coherence", "raw_intents": [ "coherence", "fluency", "coherence" ] }, { "type": "R", "before": ". While deep learning methodshave been applied to classification-based approaches, applications to", "after": "; these existing", "start_char_pos": 276, "end_char_pos": 374, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "applications have been limited, and most similarity-based methods only embody static notions of similarity", "after": "methods have only embodied static notions of similarity. Deep learning methods, which blur the boundaries between classification-based and similarity-based approaches, are promising in terms of ability to learn a notion of similarity, but have previously only been used in a conventional small-closed-class classification setup", "start_char_pos": 392, "end_char_pos": 498, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "both classification- and existing similarity-based approaches. We also find an unexpected relationship between choice of energy function and number of authors, in terms of performance", "after": "previous approaches", "start_char_pos": 896, "end_char_pos": 1079, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 74, 277, 500, 656, 958 ]
arxiv
1912.11602
1
Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information . We propose that the lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on large-scale unlabeled corpus : predicting the leading sentences using the rest of an article. Via careful data cleaning and filtering , our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks. With further finetuning, our model outperforms many competitive baseline models. Human evaluations further show the effectiveness of our method .
Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information in general . We propose that the lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora : predicting the leading sentences using the rest of an article. We collect a massive news corpus and conduct data cleaning and filtering via statistical analysis. We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve the summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART increases 13.7\% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization .
[ { "type": "A", "before": null, "after": "in general", "start_char_pos": 295, "end_char_pos": 295, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "our favor in", "start_char_pos": 348, "end_char_pos": 348, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "in our favor to pretrain", "after": "to pre-train", "start_char_pos": 376, "end_char_pos": 400, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "corpus", "after": "news corpora", "start_char_pos": 464, "end_char_pos": 470, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Via careful", "after": "We collect a massive news corpus and conduct", "start_char_pos": 536, "end_char_pos": 547, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": ", our transformer-based pretrained model without any finetuning achieves remarkable results over various news summarization tasks. With further finetuning, our model outperforms many competitive baseline models. Human evaluations further show the effectiveness of our method", "after": "via statistical analysis. We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve the summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART increases 13.7\\% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization", "start_char_pos": 576, "end_char_pos": 850, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 131, 297, 535, 706, 787 ]
null
1912.11602
2
Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information . While many algorithms exploit this fact in summary generation , it has a detrimental effect on teaching the model to discriminate and extract important information in general. We propose that the lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences using the rest of an article. We collect a massive news corpus and conduct data cleaning and filtering via statistical analysis. We then apply the proposed self-supervised pre-training to existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve the summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART increases 13.7\% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization.
A typical journalistic convention in news articles is to deliver the most salient information in the beginning, also known as the lead bias. While this phenomenon can be exploited in generating a summary , it has a detrimental effect on teaching a model to discriminate and extract important information in general. We propose that this lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences using the rest of an article. We collect a massive news corpus and conduct data cleaning and filtering via statistical analysis. We then apply self-supervised pre-training on this dataset to existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach can dramatically improve the summarization quality and achieve state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART increases 13.7\% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multi-lingual news summarization.
[ { "type": "R", "before": "Lead bias is a common phenomenon in news summarization, where early parts of an article often contain", "after": "A typical journalistic convention in news articles is to deliver", "start_char_pos": 0, "end_char_pos": 101, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ". While many algorithms exploit this fact in summary generation", "after": "in the beginning, also known as the lead bias. While this phenomenon can be exploited in generating a summary", "start_char_pos": 131, "end_char_pos": 194, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 237, "end_char_pos": 240, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "the", "after": "this", "start_char_pos": 325, "end_char_pos": 328, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "D", "before": "the proposed", "after": null, "start_char_pos": 665, "end_char_pos": 677, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "A", "before": null, "after": "on this dataset", "start_char_pos": 707, "end_char_pos": 707, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
[ 0, 132, 308, 551, 650, 772, 998, 1112 ]
arxiv
1912.13318
1
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the wide spread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this paper, we propose textbf LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. We also leverage the image features to incorporate the style information of words in LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training , leading to significant performance improvement in downstream tasks for document image understanding.
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this paper, we propose the LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. Furthermore, we also leverage the image features to incorporate the visual information of words into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training . It achieves new state-of-the-art results in several downstream tasks, including receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models will be available soon at URL
[ { "type": "R", "before": "wide spread", "after": "widespread", "start_char_pos": 111, "end_char_pos": 122, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "textbf", "after": null, "start_char_pos": 340, "end_char_pos": 346, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "R", "before": "LayoutLM", "after": "the LayoutLM", "start_char_pos": 347, "end_char_pos": 355, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "We", "after": "Furthermore, we", "start_char_pos": 600, "end_char_pos": 602, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "style", "after": "visual", "start_char_pos": 655, "end_char_pos": 660, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "in", "after": "into", "start_char_pos": 682, "end_char_pos": 684, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": ", leading to significant performance improvement in downstream tasks for document image understanding.", "after": ". It achieves new state-of-the-art results in several downstream tasks, including receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models will be available soon at URL", "start_char_pos": 843, "end_char_pos": 945, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 98, 313, 599, 694 ]
arxiv
1912.13318
2
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this paper, we propose the LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. Furthermore, we also leverage the image features to incorporate the visual information of words into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training. It achieves new state-of-the-art results in several downstream tasks, including receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models will be available soon at URL
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this paper, we propose the LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. Furthermore, we also leverage the image features to incorporate the visual information of words into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training. It achieves new state-of-the-art results in several downstream tasks, including form understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models are publicly available at URL
[ { "type": "A", "before": null, "after": "form understanding (from 70.72 to 79.27),", "start_char_pos": 936, "end_char_pos": 936, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "will be available soon", "after": "are publicly available", "start_char_pos": 1079, "end_char_pos": 1101, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 98, 312, 595, 706, 855, 1037 ]
arxiv
2001.00059
2
Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques developed for natural languages. A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings, such as BERT, which can be fine-tuned for downstream tasks with less labeled data and training budget, while achieving better accuracies. However, there is no attempt yet to obtain a high-quality contextual embedding of source code, and to evaluate it on multiple program-understanding tasks simultaneously; that is the gap that this paper aims to mitigate. Specifically, first, we curate a massive, deduplicated corpus of 6M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model; and, second, we create an open-sourced benchmark that comprises five classification tasks and one program-repair task, akin to code-understanding tasks proposed in the literature before. We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples. Future work on source-code embedding can benefit from reusing our benchmark, and comparing against CuBERT models as a strong baseline.
Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques developed for natural languages. A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings, such as BERT, which can be fine-tuned for downstream tasks with less labeled data and training budget, while achieving better accuracies. However, there is no attempt yet to obtain a high-quality contextual embedding of source code, and to evaluate it on multiple program-understanding tasks simultaneously; that is the gap that this paper aims to mitigate. Specifically, first, we curate a massive, deduplicated corpus of 7.4M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model; and, second, we create an open-sourced benchmark that comprises five classification tasks and one program-repair task, akin to code-understanding tasks proposed in the literature before. We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples. Future work on source-code embedding can benefit from reusing our benchmark, and from comparing against CuBERT models as a strong baseline.
[ { "type": "R", "before": "6M", "after": "7.4M", "start_char_pos": 721, "end_char_pos": 723, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "from", "start_char_pos": 1408, "end_char_pos": 1408, "major_intent": "fluency", "raw_intents": [ "style", "fluency", "fluency" ] } ]
[ 0, 169, 435, 605, 655, 830, 1017, 1326 ]
arxiv
2001.01037
1
This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning with attention. The result provides simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as words. We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. We show that explanation methods, firstly, correlate to object locations with higher precision than attention, secondly, are able to identify object words that are unsupported by image content, and thirdly, provide guidance to debias and improve the model. Results are reported for image captioning using two different attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models.
This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning models with attention mechanisms. The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. We show that explanation methods, firstly, correlate to object locations with higher precision than attention, secondly, are able to identify object words that are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model. Results are reported using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models.
[ { "type": "R", "before": "with attention. The result provides", "after": "models with attention mechanisms. The explanations provide", "start_char_pos": 266, "end_char_pos": 301, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "A", "before": null, "after": "preceding", "start_char_pos": 543, "end_char_pos": 543, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "debias and improve", "after": "improve and de-bias", "start_char_pos": 932, "end_char_pos": 950, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "D", "before": "for image captioning", "after": null, "start_char_pos": 983, "end_char_pos": 1003, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "image captioning", "start_char_pos": 1024, "end_char_pos": 1024, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] } ]
[ 0, 125, 281, 403, 550, 704, 961, 1089 ]
arxiv
2001.01037
2
This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation , tailored to image captioning models with attention mechanisms. The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. We compare the properties of attention heatmaps systematically against those computed with explanation methods such as LRP, Grad-CAM and Guided Grad-CAM. We show that explanation methods , firstly, correlate to object locations with higher precisionthan attention, secondly, are able to identify object wordsthat are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model. Results are reported using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models .
This paper interprets the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance propagation (LRP) and gradient-based explanation methods , tailored to image captioning models with attention mechanisms. We compare the interpretability of attention heatmaps systematically against the explanations computed with explanation methods such as LRP, Grad-CAM , and Guided Grad-CAM. We show that explanation methods provide simultaneously pixel-wise image explanation (supporting and opposing pixels of the input image) and linguistic explanation (supporting and opposing words of the preceding sequence) for each word in the predicted captions. We demonstrate with extensive experiments that explanation methods can 1) reveal more related evidence used by the model to make decisions than attention; 2) correlate to object locations with high precision; 3) is helpful to `debug' the model such as analyzing the reasons for hallucinated object words. With the observed properties of explanations, we further design an LRP-inference fine-tuning strategy that can alleviate the object hallucination of image captioning models, meanwhile, maintain the sentence fluency. We conduct experiments with two widely used attention mechanisms: the adaptive attention mechanism calculated with the additive attention and the multi-head attention calculated with the scaled dot product .
[ { "type": "R", "before": "explains", "after": "interprets the", "start_char_pos": 11, "end_char_pos": 19, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "backpropagation", "after": "propagation", "start_char_pos": 185, "end_char_pos": 200, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "gradient backpropagation", "after": "gradient-based explanation methods", "start_char_pos": 211, "end_char_pos": 235, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "The explanations provide simultaneously pixel-wise image explanation and linguistic explanation for each word in the captions. We show that given a word in the caption to be explained, explanation methods such as LRP reveal supporting and opposing pixels as well as preceding words. We compare the properties", "after": "We compare the interpretability", "start_char_pos": 301, "end_char_pos": 609, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "those", "after": "the explanations", "start_char_pos": 655, "end_char_pos": 660, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ",", "start_char_pos": 717, "end_char_pos": 717, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": ", firstly,", "after": "provide simultaneously pixel-wise image explanation (supporting and opposing pixels of the input image) and linguistic explanation (supporting and opposing words of the preceding sequence) for each word in the predicted captions. We demonstrate with extensive experiments that explanation methods can 1) reveal more related evidence used by the model to make decisions than attention; 2)", "start_char_pos": 772, "end_char_pos": 782, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "higher precisionthan attention, secondly, are able to identify object wordsthat are unsupported by image content, and thirdly, provide guidance to improve and de-bias the model. Results are reported using two different image captioning attention models trained with Flickr30K and MSCOCO2017 datasets. Experimental analyses show the strength of explanation methods for understanding image captioning attention models", "after": "high precision; 3) is helpful to `debug' the model such as analyzing the reasons for hallucinated object words. With the observed properties of explanations, we further design an LRP-inference fine-tuning strategy that can alleviate the object hallucination of image captioning models, meanwhile, maintain the sentence fluency. We conduct experiments with two widely used attention mechanisms: the adaptive attention mechanism calculated with the additive attention and the multi-head attention calculated with the scaled dot product", "start_char_pos": 818, "end_char_pos": 1233, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 125, 300, 427, 583, 738, 995, 1118 ]
arxiv
2001.04063
1
In this paper, we present a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Experimental results show ProphetNet achieves the best performance on both abstractive summarization and question generation tasks compared to the models using the same base scale pre-training dataset. For the large scale dataset pre-training, ProphetNet achieves new state-of-the-art results on Gigaword and comparable results on CNN/DailyMail using only about 1/5 pre-training epochs of the previous model .
In this paper, we present a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks . Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pre-training corpus .
[ { "type": "R", "before": "Experimental results show ProphetNet achieves the best performance on both", "after": "Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for", "start_char_pos": 731, "end_char_pos": 805, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "compared to the models using the same base scale pre-training dataset. For the large scale dataset pre-training,", "after": ". Experimental results show that", "start_char_pos": 862, "end_char_pos": 974, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before": "Gigaword and comparable results on CNN/DailyMail using only about 1/5", "after": "all these datasets compared to the models using the same scale", "start_char_pos": 1027, "end_char_pos": 1096, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "R", "before": "epochs of the previous model", "after": "corpus", "start_char_pos": 1110, "end_char_pos": 1138, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 224, 479, 624, 730, 932 ]
arxiv
2001.05272
1
Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph information , which is often overlooked. We propose the FGN , Fusion Glyph Network for Chinese NER. This method may offer glyph informationfor fusion representation learning with BERT . The major innovations of FGN include: (1) a novel CNN structure called CGS-CNN is proposed to capture glyph information from both character graphs and their neighboring graphs . (2) we provide a method with sliding window and Slice-Attention to extract interactive information between BERT representation and glyph representation . Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. Further, more experiments are conducted to investigate the influences of various components and settings in FGN.
Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph infor-mation , which is often overlooked. In this paper, we propose the FGN , Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism . The major in-novations of FGN include: (1) a novel CNN struc-ture called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters . (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge be-tween context and glyph . Experiments are con-ducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. Further, more experiments are conducted to inves-tigate the influences of various components and settings in FGN.
[ { "type": "R", "before": "information", "after": "infor-mation", "start_char_pos": 91, "end_char_pos": 102, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "We", "after": "In this paper, we", "start_char_pos": 132, "end_char_pos": 134, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "This method may offer glyph informationfor fusion representation learning with BERT", "after": "Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism", "start_char_pos": 191, "end_char_pos": 274, "major_intent": "clarity", "raw_intents": [ "clarity", "others", "clarity" ] }, { "type": "R", "before": "innovations", "after": "in-novations", "start_char_pos": 287, "end_char_pos": 298, "major_intent": "fluency", "raw_intents": [ "fluency", "others", "fluency" ] }, { "type": "R", "before": "structure", "after": "struc-ture", "start_char_pos": 331, "end_char_pos": 340, "major_intent": "others", "raw_intents": [ "others", "fluency", "others" ] }, { "type": "R", "before": "glyph information from both character graphs and their neighboring graphs", "after": "both glyph information and interactive information between glyphs from neighboring characters", "start_char_pos": 379, "end_char_pos": 452, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "extract interactive information between", "after": "fuse the", "start_char_pos": 522, "end_char_pos": 561, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "for a character, which may capture potential interactive knowledge be-tween context and glyph", "start_char_pos": 607, "end_char_pos": 607, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "conducted", "after": "con-ducted", "start_char_pos": 626, "end_char_pos": 635, "major_intent": "fluency", "raw_intents": [ "fluency", "others", "fluency" ] }, { "type": "R", "before": "investigate", "after": "inves-tigate", "start_char_pos": 802, "end_char_pos": 813, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] } ]
[ 0, 34, 131, 190, 314, 758 ]
arxiv
2001.05272
2
Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph infor-mation , which is often overlooked. In this paper, we propose the FGN, Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mechanism. The major in-novations of FGN include: (1) a novel CNN struc-ture called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters. (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge be-tween context and glyph. Experiments are con-ducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. Further, more experiments are conducted to inves-tigate the influences of various components and settings in FGN.
Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph information , which is often overlooked. In this paper, we propose the FGN, Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive information with the fusion mechanism. The major innovations of FGN include: (1) a novel CNN structure called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring characters. (2) we provide a method with sliding window and Slice-Attention to fuse the BERT representation and glyph representation for a character, which may capture potential interactive knowledge between context and glyph. Experiments are conducted on four NER datasets, showing that FGN with LSTM-CRF as tagger achieves new state-of-the-arts performance for Chinese NER. Further, more experiments are conducted to investigate the influences of various components and settings in FGN.
[ { "type": "R", "before": "infor-mation", "after": "information", "start_char_pos": 91, "end_char_pos": 103, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "infor-mation", "after": "information", "start_char_pos": 286, "end_char_pos": 298, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "in-novations", "after": "innovations", "start_char_pos": 336, "end_char_pos": 348, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "struc-ture", "after": "structure", "start_char_pos": 381, "end_char_pos": 391, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "be-tween", "after": "between", "start_char_pos": 713, "end_char_pos": 721, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "con-ducted", "after": "conducted", "start_char_pos": 757, "end_char_pos": 767, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "inves-tigate", "after": "investigate", "start_char_pos": 934, "end_char_pos": 946, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 34, 132, 205, 325, 524, 740, 890 ]
arxiv
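Each record above pairs a before_revision and after_revision abstract with character-indexed edit actions (type "R" replace, "D" delete, "A" add, plus offsets and intent labels). Below is a minimal sketch of replaying such actions, assuming the offsets index into before_revision and that the unused field of "A"/"D" actions is null; the function name is illustrative, not part of the dataset's tooling:

```python
def apply_edit_actions(before_revision: str, edit_actions: list) -> str:
    """Replay character-indexed edit actions on top of before_revision.

    Assumed semantics, inferred from the records in this dump (not an
    official spec): type "R" replaces the span [start, end), "D" deletes
    it ("after" is null), and "A" inserts "after" at start_char_pos.
    Offsets index into the original string, so actions are applied
    right-to-left to keep earlier offsets valid.
    """
    text = before_revision
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act.get("after") or ""  # null for "D" actions
        text = text[:act["start_char_pos"]] + replacement + text[act["end_char_pos"]:]
    return text
```

Applying this to the record above should reproduce the stored after_revision up to minor whitespace drift around punctuation, which makes it a convenient sanity check when loading the dump.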
2001.05687
3
Although over 95 million people worldwide speak the Vietnamese language , there are not many research studies on Vietnamese machine reading comprehension (MRC), the task of understanding a text and answering questions about it. One of the reasons is because of the lack of high-quality benchmark datasets for this task. In this work, we construct a dataset which consists of 417 Vietnamese texts and 2,783 pairs of multiple-choice questions and answers . The texts are commonly used for teaching reading comprehension for elementary school pupils. In addition, we propose a lexical-based MRC technique that utilizes semantic similarity measures and external knowledge sources to analyze questions and extract answers from the given text. We compare the performance of the proposed model with several lexical-based and neural network-based baseline models. Our proposed technique achieves 61.81\% in accuracy, which is 5.51\% higher than the best baseline model. We also measure human performance on our dataset and find that there is a big gap between human and model performances. This indicates that significant progress can be made on this task. The dataset is freely available at our website for research purposes.
Although Vietnamese is the 17th most popular native-speaker language in the world , there are not many research studies on Vietnamese machine reading comprehension (MRC), the task of understanding a text and answering questions about it. One of the reasons is because of the lack of high-quality benchmark datasets for this task. In this work, we construct a dataset which consists of 2,783 pairs of multiple-choice questions and answers based on 417 Vietnamese texts which are commonly used for teaching reading comprehension for elementary school pupils. In addition, we propose a lexical-based MRC method that utilizes semantic similarity measures and external knowledge sources to analyze questions and extract answers from the given text. We compare the performance of the proposed model with several baseline lexical-based and neural network-based models. Our proposed method achieves 61.81\% by accuracy, which is 5.51\% higher than the best baseline model. We also measure human performance on our dataset and find that there is a big gap between machine-model and human performances. This indicates that significant progress can be made on this task. The dataset is freely available on our website for research purposes.
[ { "type": "R", "before": "over 95 million people worldwide speak the Vietnamese language", "after": "Vietnamese is the 17th most popular native-speaker language in the world", "start_char_pos": 9, "end_char_pos": 71, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "417 Vietnamese texts and", "after": null, "start_char_pos": 375, "end_char_pos": 399, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ". The texts", "after": "based on 417 Vietnamese texts which", "start_char_pos": 453, "end_char_pos": 464, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "technique", "after": "method", "start_char_pos": 592, "end_char_pos": 601, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "baseline", "start_char_pos": 800, "end_char_pos": 800, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "baseline", "after": null, "start_char_pos": 840, "end_char_pos": 848, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "technique", "after": "method", "start_char_pos": 870, "end_char_pos": 879, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "in", "after": "by", "start_char_pos": 897, "end_char_pos": 899, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "human and model", "after": "machine-model and human", "start_char_pos": 1053, "end_char_pos": 1068, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "at", "after": "on", "start_char_pos": 1182, "end_char_pos": 1184, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] } ]
[ 0, 227, 319, 454, 547, 737, 856, 962, 1082, 1149 ]
arxiv
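The sents_char_pos field appears to hold cumulative sentence-boundary offsets into before_revision, with a leading 0 in every record (in the list above, offset 227 falls just past the end of the first sentence). A hedged sketch of turning it into a sentence list, assuming the final sentence runs to the end of the string:

```python
def split_sentences(before_revision: str, sents_char_pos: list) -> list:
    """Cut before_revision at the recorded sentence-boundary offsets.

    Assumes sents_char_pos = [0, end_of_sentence_1, end_of_sentence_2, ...],
    with the last sentence ending at the end of the string.
    """
    bounds = list(sents_char_pos) + [len(before_revision)]
    return [before_revision[a:b].strip() for a, b in zip(bounds, bounds[1:])]
```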
2001.07676
2
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, regular supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin.
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms supervised training and strong semi-supervised approaches in low-resource settings by a large margin.
[ { "type": "R", "before": "regular", "after": "standard", "start_char_pos": 583, "end_char_pos": 590, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "both", "after": null, "start_char_pos": 704, "end_char_pos": 708, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "unsupervised", "after": "strong semi-supervised", "start_char_pos": 733, "end_char_pos": 745, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
[ 0, 176, 485, 573, 654 ]
arxiv
2001.08604
1
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data naturally exhibits a hierarchical structure over utterances and related annotations . Deep generative data augmentation for dialogue state tracking requires the generative model to be aware of the hierarchically structured data . We propose Variational Hierarchical Dialog Autoencoder (VHDA) for modeling various aspects of goal-oriented dialogues , including linguistic and underlying annotation structures. Our experiments show that our model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialogue state trackers, ultimately improving their final dialogue state tracking performances on several datasets .
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations , deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs , including linguistic features and underlying structured annotations, namely dialog acts and goals. We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation .
[ { "type": "R", "before": "dialogue", "after": "dialog", "start_char_pos": 243, "end_char_pos": 251, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "dialogues, in which the data naturally exhibits", "after": "dialogs. Since, goal-oriented dialogs naturally exhibit", "start_char_pos": 285, "end_char_pos": 332, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": ". Deep", "after": ", deep", "start_char_pos": 398, "end_char_pos": 404, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "coherence" ] }, { "type": "R", "before": "dialogue state tracking", "after": "the task", "start_char_pos": 438, "end_char_pos": 461, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "hierarchically structured data", "after": "hierarchical nature", "start_char_pos": 511, "end_char_pos": 541, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 555, "end_char_pos": 555, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "R", "before": "various", "after": "complete", "start_char_pos": 620, "end_char_pos": 627, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": "dialogues", "after": "dialogs", "start_char_pos": 653, "end_char_pos": 662, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "and underlying annotation structures. Our experiments", "after": "features and underlying structured annotations, namely dialog acts and goals. We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments", "start_char_pos": 686, "end_char_pos": 739, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "hierarchical", "start_char_pos": 754, "end_char_pos": 754, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": "dialogue", "after": "dialog", "start_char_pos": 857, "end_char_pos": 865, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "their final dialogue", "after": "the dialog", "start_char_pos": 903, "end_char_pos": 923, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "several datasets", "after": "various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation", "start_char_pos": 955, "end_char_pos": 971, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 189, 399, 543, 723 ]
arxiv
2001.08604
2
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Since, goal-oriented dialogs naturally exhibit a hierarchical structure over utterances and related annotations, deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature . We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely dialog acts and goals. We also propose two training policies to mitigate issues that arise with training VAE-based models. Experiments show that our hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language generation and user simulation .
Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models complement the training dataset, benefit NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Due to the inherent hierarchical structure of goal-oriented dialogs over utterances and related annotations, the deep generative model must be capable of capturing the coherence among different hierarchies and types of dialog features . We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling the complete aspects of goal-oriented dialogs, including linguistic features and underlying structured annotations, namely speaker information, dialog acts, and goals. The proposed architecture is designed to model each aspect of goal-oriented dialogs using inter-connected latent variables and learns to generate coherent goal-oriented dialogs from the latent spaces. To overcome training issues that arise from training complex variational models, we propose appropriate training strategies. Experiments on various dialog datasets show that our model improves the downstream dialog trackers' robustness via generative data augmentation. We also discover additional benefits of our unified approach to modeling goal-oriented dialogs: dialog response generation and user simulation , where our model outperforms previous strong baselines .
[ { "type": "R", "before": "are used to augment", "after": "complement", "start_char_pos": 121, "end_char_pos": 140, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "certain", "after": null, "start_char_pos": 171, "end_char_pos": 178, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "Since,", "after": "Due to the inherent hierarchical structure of", "start_char_pos": 292, "end_char_pos": 298, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "dialogs naturally exhibit a hierarchical structure", "after": "dialogs", "start_char_pos": 313, "end_char_pos": 363, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "deep generative data augmentation for the task requires the generative model to be aware of the hierarchical nature", "after": "the deep generative model must be capable of capturing the coherence among different hierarchies and types of dialog features", "start_char_pos": 405, "end_char_pos": 520, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 602, "end_char_pos": 602, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "R", "before": "dialog acts", "after": "speaker information, dialog acts,", "start_char_pos": 722, "end_char_pos": 733, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "We also propose two training policies to mitigate", "after": "The proposed architecture is designed to model each aspect of goal-oriented dialogs using inter-connected latent variables and learns to generate coherent goal-oriented dialogs from the latent spaces. To overcome training", "start_char_pos": 745, "end_char_pos": 794, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "R", "before": "with training VAE-based models. Experiments", "after": "from training complex variational models, we propose appropriate training strategies. Experiments on various dialog datasets", "start_char_pos": 813, "end_char_pos": 856, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "hierarchical model is able to generate realistic and novel samples that improve the robustness of state-of-the-art dialog state trackers, ultimately improving the dialog state tracking performances on various dialog domains. Surprisingly, the ability to jointly generate dialog features enables our model to outperform previous state-of-the-arts in related subtasks, such as language", "after": "model improves the downstream dialog trackers' robustness via generative data augmentation. We also discover additional benefits of our unified approach to modeling goal-oriented dialogs: dialog response", "start_char_pos": 871, "end_char_pos": 1254, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ", where our model outperforms previous strong baselines", "start_char_pos": 1286, "end_char_pos": 1286, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 189, 291, 522, 744, 844, 1095 ]
arxiv
2001.11453
1
Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task-language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods ; it increases performance by 4.49 points for POS tagging and 7.73 points for NER on average compared to the strongest baseline .
Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task--language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task--language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods . Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy strongly correlates with accuracy. Hence, the proposed framework also offers robust estimates of uncertainty .
[ { "type": "R", "before": "task-language", "after": "task--language", "start_char_pos": 209, "end_char_pos": 222, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "task-language", "after": "task--language", "start_char_pos": 541, "end_char_pos": 554, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "; it increases performance by 4.49 points for POS tagging and 7.73 points for NER on average compared to the strongest baseline", "after": ". Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy strongly correlates with accuracy. Hence, the proposed framework also offers robust estimates of uncertainty", "start_char_pos": 1111, "end_char_pos": 1238, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 143, 277, 366, 465, 598, 679, 870, 1112 ]
arxiv
2001.11453
2
Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task--language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task--language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods. Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy strongly correlates with accuracy. Hence, the proposed framework also offers robust estimates of uncertainty.
Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? In this work, we propose a Bayesian generative model for the space of neural parameters. We assume that this space can be factorized into latent variables for each language and each task. We infer the posteriors over such latent variables based on data from seen task-language combinations through variational inference. This enables zero-shot classification on unseen combinations at prediction time. For instance, given training data for named entity recognition (NER) in Vietnamese and for part-of-speech (POS) tagging in Wolof, our model can perform accurate predictions for NER in Wolof. In particular, we experiment with a typologically diverse sample of 33 languages from 4 continents and 11 families, and show that our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods. Moreover, we demonstrate that approximate Bayesian model averaging results in smoother predictive distributions, whose entropy inversely correlates with accuracy. Hence, the proposed framework also offers robust estimates of prediction uncertainty. Our code is located at github.com/cambridgeltl/parameter-factorization
[ { "type": "R", "before": "task--language", "after": "task-language", "start_char_pos": 209, "end_char_pos": 223, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "task--language", "after": "task-language", "start_char_pos": 542, "end_char_pos": 556, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "strongly", "after": "inversely", "start_char_pos": 1241, "end_char_pos": 1249, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] }, { "type": "R", "before": "uncertainty.", "after": "prediction uncertainty. Our code is located at github.com/cambridgeltl/parameter-factorization", "start_char_pos": 1338, "end_char_pos": 1350, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 143, 278, 367, 466, 600, 681, 872, 1113, 1275 ]
arxiv
2002.06353
1
We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated instructional videos. Different from their works which only pre-train understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks . Our model comprises of 4 components including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone. We first pre-train our model to learn the universal representation for both video and language on a large instructional video dataset. Then we fine-tune the model on two multimodal tasks including understanding task (text-based video retrieval) and generation task (multimodal video captioning). Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves the state-of-the art results.
With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training . Besides, most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks. In this paper, we propose UniViLM: a Unified Video and Language pre-training Model for both multimodal understanding and generation . Our model comprises of 4 components including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone. We first pre-train our model to learn the universal representation for both video and language on a large instructional video dataset. Then we fine-tune the model on two multimodal tasks including understanding task (text-based video retrieval) and generation task (multimodal video captioning). Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves the state-of-the art results.
[ { "type": "R", "before": "We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by", "after": "With", "start_char_pos": 0, "end_char_pos": 125, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "D", "before": "BERT based", "after": null, "start_char_pos": 148, "end_char_pos": 158, "major_intent": "coherence", "raw_intents": [ "style", "coherence", "coherence" ] }, { "type": "R", "before": "image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language", "after": "image-linguistic tasks, there are still few works on video-linguistic", "start_char_pos": 194, "end_char_pos": 291, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "using narrated instructional videos. Different from their works which only pre-train", "after": ". Besides, most of the existing multimodal models are pre-trained for", "start_char_pos": 305, "end_char_pos": 389, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "R", "before": "we propose a unified video-language", "after": "which leads to a pretrain-finetune discrepency for generation tasks. In this paper, we propose UniViLM: a Unified Video and Language", "start_char_pos": 410, "end_char_pos": 445, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "model for both", "after": "Model for both multimodal", "start_char_pos": 459, "end_char_pos": 473, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "tasks", "after": null, "start_char_pos": 503, "end_char_pos": 508, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] } ]
[ 0, 112, 341, 510, 644, 779, 940 ]
arxiv
2002.09253
1
Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goals by jointly learning a language model and a goal-conditioned reward function. Just like humans, our agent uses language compositionality to generate new goals by composing known ones . Leveraging modular model architectures based on Deep Sets and gated-attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. When imagining its own goals, the agent leverages zero-shot generalization of the reward function to further train on imagined goals and refine its behavior. We present experiments in a simulated domain where the agent interacts with procedurally generated scenes containing objects of various types and colors, discovers goals, imagines others and learns to achieve them.
Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goals by jointly learning a language encoder and a goal-conditioned reward function. Just like humans, our agent uses language compositionality to generate new goals by composing known ones , using an algorithm grounded in construction grammar models of child language acquisition . Leveraging modular model architectures based on deepsets and gated attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. When imagining its own goals, the agent leverages zero-shot generalization of the reward function to further train on imagined goals and refine its behavior. We present experiments in a simulated domain where the agent interacts with procedurally generated scenes containing objects of various types and colors, discovers goals, imagines others and learns to achieve them.
[ { "type": "R", "before": "model", "after": "encoder", "start_char_pos": 586, "end_char_pos": 591, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ", using an algorithm grounded in construction grammar models of child language acquisition", "start_char_pos": 737, "end_char_pos": 737, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "Deep Sets and gated-attention", "after": "deepsets and gated attention", "start_char_pos": 788, "end_char_pos": 817, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] } ]
[ 0, 174, 317, 520, 631, 971, 1129 ]
arxiv
2002.09253
2
Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goalsby jointly learning a language encoder and a goal-conditioned reward function . Just like humans, our agent uses language compositionality to generate new goals by composing known ones, using an algorithm grounded in construction grammar models of child language acquisition. Leveraging modular model architectures based on deepsets and gated attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. When imagining its own goals, the agent leverages zero-shot generalization of the reward function to further train on imagined goals and refine its behavior. We present experiments in a simulated domain where the agent interacts with procedurally generated scenes containing objects of various types and colors, discovers goals, imagines others and learns to achieve them .
Developmental machine learning studies how artificial agents can model the way children learn open-ended repertoires of skills. Such agents need to create and represent goals, select which ones to pursue and learn to achieve them. Recent approaches have considered goal spaces that were either fixed and hand-defined or learned using generative models of states. This limited agents to sample goals within the distribution of known effects. We argue that the ability to imagine out-of-distribution goals is key to enable creative discoveries and open-ended learning. Children do so by leveraging the compositionality of language as a tool to imagine descriptions of outcomes they never experienced before, targeting them as goals during play. We introduce Imagine, an intrinsically motivated deep reinforcement learning architecture that models this ability. Such imaginative agents, like children, benefit from the guidance of a social peer who provides language descriptions. To take advantage of goal imagination, agents must be able to leverage these descriptions to interpret their imagined out-of-distribution goals. This generalization is made possible by modularity: a decomposition between learned goal-achievement reward function and policy relying on deep sets, gated attention and object-centered representations. We introduce the Playground environment and study how this form of goal imagination improves generalization and exploration over agents lacking this capacity. In addition, we identify the properties of goal imagination that enable these results and study the impacts of modularity and social interactions .
[ { "type": "R", "before": "Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how", "after": "Developmental machine learning studies how artificial agents can model the way children learn open-ended repertoires of skills. Such agents need to create and represent goals, select which ones to pursue and learn", "start_char_pos": 0, "end_char_pos": 157, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In our proposed learning architecture (IMAGINE), the agent freely explores its environment and turns natural language descriptions of interesting interactions from a social partner into potential goals. IMAGINE learns to represent goalsby jointly learning a language encoder and a goal-conditioned reward function . Just like humans, our agent uses language compositionality to generate new goals by composing known ones, using an algorithm grounded in construction grammar models of child language acquisition. Leveraging modular model architectures based on deepsets and gated attention mechanisms, IMAGINE autonomously builds a repertoire of behaviors and shows good zero-shot generalization properties for various types of generalization. When imagining its own goals, the agent leverages zero-shot generalization of the reward function to further train on imagined goals and refine its behavior. We present experiments in a simulated domain where the agent interacts with procedurally generated scenes containing objects of various types and colors, discovers goals, imagines others and learns to achieve them", "after": "Recent approaches have considered goal spaces that were either fixed and hand-defined or learned using generative models of states. This limited agents to sample goals within the distribution of known effects. We argue that the ability to imagine out-of-distribution goals is key to enable creative discoveries and open-ended learning. Children do so by leveraging the compositionality of language as a tool to imagine descriptions of outcomes they never experienced before, targeting them as goals during play. We introduce Imagine, an intrinsically motivated deep reinforcement learning architecture that models this ability. Such imaginative agents, like children, benefit from the guidance of a social peer who provides language descriptions. To take advantage of goal imagination, agents must be able to leverage these descriptions to interpret their imagined out-of-distribution goals. This generalization is made possible by modularity: a decomposition between learned goal-achievement reward function and policy relying on deep sets, gated attention and object-centered representations. We introduce the Playground environment and study how this form of goal imagination improves generalization and exploration over agents lacking this capacity. In addition, we identify the properties of goal imagination that enable these results and study the impacts of modularity and social interactions", "start_char_pos": 175, "end_char_pos": 1432, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] } ]
[ 0, 174, 317, 520, 829, 1060, 1218 ]
arxiv
2002.09616
1
Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round.However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models .
Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i.e., they use several consistent messages for readability instead of a long sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the user has some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing models in solving this Wait-or-Answer problem .
[ { "type": "R", "before": "Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round.However, in real human-human conversations, human often sequentially sends several short", "after": "Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i.e., they use several consistent", "start_char_pos": 0, "end_char_pos": 443, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper", "after": "sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further", "start_char_pos": 487, "end_char_pos": 771, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "novel", "after": "predictive approach dubbed", "start_char_pos": 787, "end_char_pos": 792, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "neural dialogue", "after": "to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator", "start_char_pos": 822, "end_char_pos": 837, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "agent decide whether", "after": "dialogue system decide", "start_char_pos": 856, "end_char_pos": 876, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models", "after": "answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. 
The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the user has some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing models in solving this Wait-or-Answer problem", "start_char_pos": 888, "end_char_pos": 1517, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 111, 259, 355, 507, 619, 734, 916, 980, 1157, 1244, 1392 ]
arxiv
2002.09616
2
Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent messages for readability instead of a long sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further , we propose a predictive approach dubbed Imagine-then-Arbitrate (ITA) to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator model to help the dialogue system decide to wait or answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the userhas some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing modelsin solving this Wait-or-Answer problem .
Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round. However, in real human-human conversations, human often sequentially sends several short messages for readability instead of a long message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper , we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly. Our method has two imaginator modules and an arbitrator module. The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models .
[ { "type": "R", "before": "Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent", "after": "Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one response to one utterance each round. However, in real human-human conversations, human often sequentially sends several short", "start_char_pos": 0, "end_char_pos": 207, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "sentence to express their question. This creates a predicament faced by dialogue systems' application, especially in real-world industrial scenarios, in which the dialogue system is unsure that whether it should answer the user's query immediately or wait for users' further supplementary input. Motivated by such interesting quandary, we define a novel task: Wait-or-Answer to better tackle this dilemma faced by dialogue systems. We shed light on a new research topic about how the dialogue system can be more competent to behave in this Wait-or-Answer quandary. Further", "after": "message in one turn. Thus messages will not end with an explicit ending signal, which is crucial for agents to decide when to reply. So the first step for an intelligent dialogue agent is not replying but deciding if it should reply at the moment. To address this issue, in this paper", "start_char_pos": 251, "end_char_pos": 823, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "predictive approach dubbed", "after": "novel", "start_char_pos": 839, "end_char_pos": 865, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "to resolve this Wait-or-Answer task. More specifically, we take advantage of an arbitrator", "after": "neural dialogue", "start_char_pos": 895, "end_char_pos": 985, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "coherence" ] }, { "type": "R", "before": "dialogue system decide", "after": "agent decide whether", "start_char_pos": 1004, "end_char_pos": 1026, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "answer. The arbitrator's decision is made with the assistance of two ancillary imaginator models: a wait imaginator and an answer imaginator. The wait imaginator tries to predict what the user would supplement and use its prediction to persuade the arbitrator that the userhas some information to add, so the dialogue system should wait. The answer imaginator, nevertheless, struggles to predict the answer of the dialogue system and convince the arbitrator that it's a superior choice to answer the users' query immediately. To our best knowledge, our paper is the first work to explicitly define the Wait-or-Answer task in the dialogue system. Additionally, our proposed ITA approach significantly outperforms the existing modelsin solving this Wait-or-Answer problem", "after": "to make a response directly. Our method has two imaginator modules and an arbitrator module. 
The two imaginators will learn the agent's and user's speaking style respectively, generate possible utterances as the input of the arbitrator, combining with dialogue history. And the arbitrator decides whether to wait or to make a response to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on addressing ending prediction issue and outperforms baseline models", "start_char_pos": 1038, "end_char_pos": 1807, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 84, 286, 546, 682, 815, 931, 1045, 1179, 1375, 1563, 1683 ]
arxiv
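The rows above show the record layout: each `edit_actions` entry carries a `type` ("R", "A", or "D", which the rows suggest mean replace, add, and delete), `before`/`after` spans, and character offsets into `before_revision`. A minimal sketch of replaying these actions, assuming the offsets index into `before_revision` and that the field arrives as a JSON string; whitespace around pure insertions may still need light normalization:

```python
import json

def apply_edit_actions(before_revision: str, edit_actions: list) -> str:
    """Replay one row's edit_actions on its before_revision text.

    Edits are applied right-to-left so earlier character offsets stay valid.
    """
    text = before_revision
    for action in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = action["after"] or ""  # "D" (delete) actions carry after=null
        text = text[: action["start_char_pos"]] + replacement + text[action["end_char_pos"] :]
    return text

# Hypothetical usage on one row of this dump:
# reconstructed = apply_edit_actions(row["before_revision"], json.loads(row["edit_actions"]))
```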
2002.10107
2
Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality. These systems mainly rely on community reports for assessing contents, which has serious problems such as the slow handling of violations, the loss of normal and experienced users' time, the low quality of some reports, and discouraging feedback to new users. Therefore, with the overall goal of providing solutions for automating moderation actions in Q&A websites, we aim to provide a model to predict 20 quality or subjective aspects of questions in QA websites. To this end, we used data gathered by the CrowdSource team at Google Research in 2019 and fine-tuned pre-trained BERT model on our problem. Based on evaluation by Mean-Squared-Error (MSE), model achieved the value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. Results confirm that by simple fine-tuning, we can achieve accurate models in little time and on less amount of data.
Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality. These systems mainly rely on community reports for assessing contents, which has serious problems such as the slow handling of violations, the loss of normal and experienced users' time, the low quality of some reports, and discouraging feedback to new users. Therefore, with the overall goal of providing solutions for automating moderation actions in Q&A websites, we aim to provide a model to predict 20 quality or subjective aspects of questions in QA websites. To this end, we used data gathered by the CrowdSource team at Google Research in 2019 and a fine-tuned pre-trained BERT model on our problem. Based on the evaluation by Mean-Squared-Error (MSE), the model achieved a value of 0.046 after 2 epochs of training, which did not improve substantially in the next ones. Results confirm that by simple fine-tuning, we can achieve accurate models in little time and on less amount of data.
[ { "type": "A", "before": null, "after": "a", "start_char_pos": 709, "end_char_pos": 709, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 769, "end_char_pos": 769, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "model achieved the", "after": "the model achieved a", "start_char_pos": 810, "end_char_pos": 828, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 152, 412, 618, 759, 925 ]
arxiv
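The 2002.10107 record describes fine-tuning BERT to regress 20 quality aspects of questions with an MSE objective. A minimal sketch of that setup with Hugging Face `transformers`; the checkpoint name, batch contents, and target values are placeholders, not the authors' exact configuration:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=20,              # one output per quality aspect
    problem_type="regression",  # makes the model use MSELoss internally
)

batch = tokenizer(["How do I reverse a list in Python?"],
                  return_tensors="pt", truncation=True, padding=True)
targets = torch.rand(1, 20)     # placeholder aspect scores
loss = model(**batch, labels=targets).loss
loss.backward()                 # one MSE training step (optimizer omitted)
```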
2003.02645
1
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data ? have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the ? occurrence of posterior collapse with VAEs. The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering, a transfer learning task, without fine-tuning. To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models.
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapse with VAEs. The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering, a transfer learning task, without fine-tuning. To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models.
[ { "type": "D", "before": "?", "after": null, "start_char_pos": 206, "end_char_pos": 207, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "?", "after": null, "start_char_pos": 335, "end_char_pos": 336, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] } ]
[ 0, 134, 380, 541, 637, 878, 1029 ]
arxiv
2003.02645
2
We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. The recently proposed MIM framework encourages high mutual information between observations and latent variables, and is more robust against posterior collapse. This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich latent space, allowing for interpolation between sentences of different lengths with a fixed-dimensional latent representation. We also demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering , a transfer learningtask , without fine-tuning . To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models .
SentenceMIM is a probabilistic auto-encoder for language data , trained with Mutual Information Machine (MIM) learning to provide a fixed length representation of variable length language observations (ie, similar to VAE) . Previous attempts to learn VAEs for language data faced challenges due to posterior collapse. MIM learning encourages high mutual information between observations and latent variables, and is robust against posterior collapse. As such, it learns informative representations whose dimension can be an order of magnitude higher than existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare sentenceMIM with VAE, and AE on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured latent space, comparable to VAEs. The structured latent representation is demonstrated with interpolation between sentences of different lengths . We demonstrate the versatility of sentenceMIM by utilizing a trained model for question-answering and transfer learning , without fine-tuning , outperforming VAE and AE with similar architectures .
[ { "type": "R", "before": "We introduce sentenceMIM,", "after": "SentenceMIM is", "start_char_pos": 0, "end_char_pos": 25, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "modelling", "after": "data", "start_char_pos": 68, "end_char_pos": 77, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "to provide a fixed length representation of variable length language observations (ie, similar to VAE)", "start_char_pos": 135, "end_char_pos": 135, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "variational auto-encoders", "after": "VAEs", "start_char_pos": 165, "end_char_pos": 190, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barrier being the occurrence of posterior collapsewith VAEs. The recently proposed MIM framework", "after": "faced challenges due to posterior collapse. MIM learning", "start_char_pos": 209, "end_char_pos": 414, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "D", "before": "more", "after": null, "start_char_pos": 500, "end_char_pos": 504, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "This paper formulates a MIM model for text data, along with a corresponding learning algorithm. We demonstrate excellent perplexity (PPL) results on several datasets, and show that the framework learns a rich", "after": "As such, it learns informative representations whose dimension can be an order of magnitude higher than existing language VAEs. Importantly, the SentenceMIM loss has no hyper-parameters, simplifying optimization. We compare sentenceMIM with VAE, and AE on multiple datasets. SentenceMIM yields excellent reconstruction, comparable to AEs, with a rich structured", "start_char_pos": 540, "end_char_pos": 748, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "allowing for", "after": "comparable to VAEs. The structured latent representation is demonstrated with", "start_char_pos": 763, "end_char_pos": 775, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "with a fixed-dimensional latent representation. We also", "after": ". We", "start_char_pos": 829, "end_char_pos": 884, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": ", a transfer learningtask", "after": "and transfer learning", "start_char_pos": 980, "end_char_pos": 1005, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": ". To the best of our knowledge, this is the first latent variable model (LVM) for text modelling that achieves competitive performance with non-LVM models", "after": ", outperforming VAE and AE with similar architectures", "start_char_pos": 1028, "end_char_pos": 1182, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 378, 539, 635, 876, 1029 ]
arxiv
2004.12316
1
Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empathetic conversations . To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impacts of persona on empathetic responding. Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations . We then propose CoBERT, an efficient BERT-based response selection model that obtains the state-of-the-art performance on our dataset. Finally, we conduct extensive experiments to investigate the impacts of persona on empathetic responding. Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations .
Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empathetic dialogues . To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. Specifically, we first present a novel large-scale multi-domain dataset for empathetic dialogues with personas . We then propose CoBERT, an efficient BERT-based response selection model that obtains the state-of-the-art performance on our dataset. Finally, we conduct extensive experiments to investigate the impacts of persona on empathetic responding. Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues .
[ { "type": "R", "before": "conversational models", "after": "dialogue systems", "start_char_pos": 11, "end_char_pos": 32, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "conversations", "after": "dialogues", "start_char_pos": 330, "end_char_pos": 343, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "towards persona-based empathetic conversations", "after": "to endow empathetic dialogue systems with personas", "start_char_pos": 381, "end_char_pos": 427, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "persona-based empathetic conversations", "after": "empathetic dialogues with personas", "start_char_pos": 594, "end_char_pos": 632, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "conversations", "after": "dialogues", "start_char_pos": 988, "end_char_pos": 1001, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "conversations", "after": "dialogues", "start_char_pos": 1096, "end_char_pos": 1109, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] } ]
[ 0, 116, 228, 345, 517, 634, 769, 875 ]
arxiv
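The 2004.12316 records mention CoBERT, a BERT-based response selection model, without spelling out its internals. As a stand-in (explicitly not CoBERT itself), a generic bi-encoder baseline that scores candidate responses against the dialogue context via [CLS] embeddings:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def select_response(context: str, candidates: list) -> str:
    """Pick the candidate whose [CLS] embedding best matches the context."""
    batch = tok([context] + candidates, return_tensors="pt",
                padding=True, truncation=True)
    with torch.no_grad():
        cls = bert(**batch).last_hidden_state[:, 0]  # [CLS] vectors
    scores = cls[1:] @ cls[0]                        # dot product with context
    return candidates[int(scores.argmax())]
```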
2004.12316
2
Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empathetic dialogues . To this end, we propose a new task to endow empathetic dialogue systems with personas and present the first empirical study on the impacts of persona on empathetic responding. Specifically, we first present a novel large-scale multi-domain dataset for empathetic dialogues with personas . We then propose CoBERT, an efficient BERT-based response selection model that obtains the state-of-the-art performance on our dataset. Finally, we conduct extensive experiments to investigate the impacts of persona on empathetic responding. Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic dialogues than non-empathetic ones, establishing an empirical link between persona and empathy in human dialogues .
Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empathetic conversations . To this end, we propose a new task towards persona-based empathetic conversations and present the first empirical study on the impact of persona on empathetic responding. Specifically, we first present a novel large-scale multi-domain dataset for persona-based empathetic conversations . We then propose CoBERT, an efficient BERT-based response selection model that obtains the state-of-the-art performance on our dataset. Finally, we conduct extensive experiments to investigate the impact of persona on empathetic responding. Notably, our results show that persona improves empathetic responding more when CoBERT is trained on empathetic conversations than non-empathetic ones, establishing an empirical link between persona and empathy in human conversations .
[ { "type": "R", "before": "dialogue systems", "after": "conversational models", "start_char_pos": 11, "end_char_pos": 27, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "dialogues", "after": "conversations", "start_char_pos": 325, "end_char_pos": 334, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "to endow empathetic dialogue systems with personas", "after": "towards persona-based empathetic conversations", "start_char_pos": 372, "end_char_pos": 422, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "impacts", "after": "impact", "start_char_pos": 468, "end_char_pos": 475, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "style" ] }, { "type": "R", "before": "empathetic dialogues with personas", "after": "persona-based empathetic conversations", "start_char_pos": 589, "end_char_pos": 623, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "impacts", "after": "impact", "start_char_pos": 822, "end_char_pos": 829, "major_intent": "fluency", "raw_intents": [ "fluency", "style", "fluency" ] }, { "type": "R", "before": "dialogues", "after": "conversations", "start_char_pos": 979, "end_char_pos": 988, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "dialogues", "after": "conversations", "start_char_pos": 1083, "end_char_pos": 1092, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 111, 223, 336, 512, 625, 760, 866 ]
arxiv
2004.12765
1
Automatic humor detection has interesting use cases in modern technologies, such as chatbots and personal assistants. In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. Our proposed model uses BERT to generate tokens and sentence embedding for texts. It sends embedding outputs as input to a two-layered neural networkthat predicts the target value. For evaluation , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive , 100k negative). Experimental results show an accuracy of 98.1 percent for the proposed method , 2.1 percent improvement compared to the best CNN and RNN models and 1.1 percentbetter than a fine-tuned BERT model. In addition, the combination of RNN-CNN was not successful in this task compared to the CNN model .
Automatic humor detection has interesting use cases in modern technologies, such as chatbots and virtual assistants. Based on the general linguistic structure of humor, in this paper, we propose a novel approach for detecting humor in short texts by using BERT sentence embedding. Our proposed method uses BERT to generate embeddings for sentences of a given text and uses these embeddings as inputs for parallel lines of hidden layers in a neural network. These lines are finally concatenated to predict the target value. For evaluation purposes , we created a new dataset for humor detection consisting of 200k formal short texts (100k positive and 100k negative). Experimental results show that our proposed method can determine humor in short texts with accuracy and an F1-score of 98.2 percent. Our 8-layer model with 110M parameters outperforms all baseline models with a large margin, showing the importance of utilizing linguistic structure in machine learning models .
[ { "type": "R", "before": "personal assistants. In", "after": "virtual assistants. Based on the general linguistic structure of humor, in", "start_char_pos": 97, "end_char_pos": 120, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "describe", "after": "propose", "start_char_pos": 136, "end_char_pos": 144, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "A", "before": null, "after": "by", "start_char_pos": 197, "end_char_pos": 197, "major_intent": "fluency", "raw_intents": [ "fluency", "coherence", "fluency" ] }, { "type": "R", "before": "model", "after": "method", "start_char_pos": 242, "end_char_pos": 247, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "tokens and sentence embedding for texts. It sends embedding outputs as input to a two-layered neural networkthat predicts", "after": "embeddings for sentences of a given text and uses these embeddings as inputs for parallel lines of hidden layers in a neural network. These lines are finally concatenated to predict", "start_char_pos": 270, "end_char_pos": 391, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "purposes", "start_char_pos": 425, "end_char_pos": 425, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": ",", "after": "and", "start_char_pos": 526, "end_char_pos": 527, "major_intent": "fluency", "raw_intents": [ "coherence", "fluency", "fluency" ] }, { "type": "R", "before": "an accuracy of 98.1 percent for the proposed method , 2.1 percent improvement compared to the best CNN and RNN models and 1.1 percentbetter than a fine-tuned BERT model. In addition, the combination of RNN-CNN was not successful in this task compared to the CNN model", "after": "that our proposed method can determine humor in short texts with accuracy and an F1-score of 98.2 percent. Our 8-layer model with 110M parameters outperforms all baseline models with a large margin, showing the importance of utilizing linguistic structure in machine learning models", "start_char_pos": 570, "end_char_pos": 837, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] } ]
[ 0, 117, 228, 310, 409, 543, 739 ]
arxiv
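The revised 2004.12765 abstract describes a concrete architecture: per-sentence BERT embeddings feed parallel lines of hidden layers whose outputs are concatenated to predict the humor label. A sketch of that shape in PyTorch; the sentence count, hidden width, and use of [CLS] vectors are illustrative assumptions, not the paper's exact settings:

```python
import torch
import torch.nn as nn

class ParallelSentenceHumorNet(nn.Module):
    """One hidden 'line' per sentence embedding, concatenated into a classifier."""

    def __init__(self, n_sentences: int = 4, emb_dim: int = 768, hidden: int = 128):
        super().__init__()
        self.lines = nn.ModuleList(
            [nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
             for _ in range(n_sentences)]
        )
        self.head = nn.Linear(n_sentences * hidden, 1)  # humor / not-humor logit

    def forward(self, sentence_embs: torch.Tensor) -> torch.Tensor:
        # sentence_embs: (batch, n_sentences, emb_dim), e.g. BERT [CLS] vectors
        feats = [line(sentence_embs[:, i]) for i, line in enumerate(self.lines)]
        return self.head(torch.cat(feats, dim=-1))
```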
2004.14519
2
Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER) , Part-of-Speech tagging (POS), Argument Role Labeling (ARL), and Relation Extraction (RE). A few multilingual pre-trained models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. Their performance on the IEtasks is less known, in particular, the cross-lingual transfer capability from English to Arabic . In this work , we pre-train a Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learningon various IE tasks. Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both supervised and zero-shot learning settings. footnote We have made our pre-trained models publicly available at URL
Multilingual pre-trained Transformers, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable the effective cross-lingual zero-shot transfer. However, their performance on Arabic information extraction (IE) tasks is not very well studied . In this paper , we pre-train a customized bilingual BERT, dubbed GigaBERT, that is designed specifically for Arabic NLP and English-to-Arabic zero-shot transfer learning. We study GigaBERT's effectiveness on zero-short transfer across four IE tasks: named entity recognition, part-of-speech tagging, argument role labeling, and relation extraction. Our best model significantly outperforms mBERT, XLM-RoBERTa, and AraBERT (Antoun et al., 2020) in both the supervised and zero-shot transfer settings. We have made our pre-trained models publicly available at URL
[ { "type": "R", "before": "Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER) , Part-of-Speech tagging (POS), Argument Role Labeling (ARL), and Relation Extraction (RE). A few multilingual", "after": "Multilingual", "start_char_pos": 0, "end_char_pos": 254, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "models have been proposed and show good performance for Arabic, however, most experiment results are reported on language understanding tasks, such as natural language inference, question answering and sentiment analysis. Their performance on the IEtasks is less known, in particular, the", "after": "Transformers, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable the effective", "start_char_pos": 267, "end_char_pos": 555, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "transfer capability from English to Arabic", "after": "zero-shot transfer. However, their performance on Arabic information extraction (IE) tasks is not very well studied", "start_char_pos": 570, "end_char_pos": 612, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "work", "after": "paper", "start_char_pos": 623, "end_char_pos": 627, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "Gigaword-based bilingual language model (GigaBERT) to study these two distant languages as well as zero-short transfer learningon various IE tasks. Our GigaBERT outperforms multilingual BERT and and monolingual AraBERT on these tasks, in both", "after": "customized bilingual BERT, dubbed GigaBERT, that is designed specifically for Arabic NLP and English-to-Arabic zero-shot transfer learning. We study GigaBERT's effectiveness on zero-short transfer across four IE tasks: named entity recognition, part-of-speech tagging, argument role labeling, and relation extraction. Our best model significantly outperforms mBERT, XLM-RoBERTa, and AraBERT (Antoun et al., 2020) in both the", "start_char_pos": 645, "end_char_pos": 887, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "learning settings. footnote", "after": "transfer settings.", "start_char_pos": 913, "end_char_pos": 940, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] } ]
[ 0, 235, 488, 614, 792 ]
arxiv
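The 2004.14519 record concerns pre-training a bilingual BERT (GigaBERT). Its corpus mix, vocabulary, and schedule are not given in the row, so the following is only a generic masked-LM pre-training step on mixed-language text, not GigaBERT's actual recipe:

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
collator = DataCollatorForLanguageModeling(tok, mlm_probability=0.15)

# The collator randomly masks tokens and builds MLM labels for one batch.
batch = collator([tok("مثال بالعربية مع English text", truncation=True)])
loss = model(**batch).loss  # standard masked-LM objective
loss.backward()             # optimizer step omitted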
2004.14601
1
We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning. We test how a language model can leverage its internal representations to transfer knowledge across languages and symbol systems. We train LSTMs on non-linguistic , structured data and test their performance on human language to assess which kinds of data induce generalizable encodings that LSTMs can use for natural language. We find that models trained on structured data such as music and Java codehave internal representations that help in modelling human language, and that, surprisingly, adding minimal amounts of structure to the training data makes a large difference in transfer to natural language . Further experiments on transfer between human languages show that zero-shot performance on a test language is highly correlated with syntactic similarity to the training language, even after removing any vocabulary overlap. This suggests that the internal representations induced from natural languages are typologically coherent: they encode the features and differences outlined in typological studies . Our results provide insights into how neural networks represent linguistic structure, and also about the kinds of structural biases that give learners the ability to model language.
We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models . We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language. We find that training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap. Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology . Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which a learner needs to model language.
[ { "type": "R", "before": "a novel methodology", "after": "transfer learning as a method", "start_char_pos": 11, "end_char_pos": 30, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "through transfer learning. We test how a language model can leverage its internal representations to transfer knowledge across languages and symbol systems. We", "after": ". We", "start_char_pos": 109, "end_char_pos": 268, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": ", structured data and test", "after": "data and evaluate", "start_char_pos": 299, "end_char_pos": 325, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "human", "after": "natural", "start_char_pos": 347, "end_char_pos": 352, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "encodings", "after": "structural features", "start_char_pos": 413, "end_char_pos": 422, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "models trained on structured data such as music and Java codehave internal representations that help in modelling human language, and that, surprisingly, adding minimal amounts of structure to the training data makes a large difference in transfer to natural language . Further experiments", "after": "training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap. Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments", "start_char_pos": 477, "end_char_pos": 766, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "human", "after": "natural", "start_char_pos": 787, "end_char_pos": 792, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "typological", "start_char_pos": 880, "end_char_pos": 880, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "coherence", "meaning-changed" ] }, { "type": "R", "before": "even after removing any vocabulary overlap. 
This suggests that the internal", "after": "suggesting that", "start_char_pos": 928, "end_char_pos": 1003, "major_intent": "clarity", "raw_intents": [ "fluency", "clarity", "clarity" ] }, { "type": "R", "before": "are typologically coherent: they encode the features and differences outlined in typological studies", "after": "correspond to the cross-linguistic syntactic properties studied in linguistic typology", "start_char_pos": 1051, "end_char_pos": 1151, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "R", "before": "how neural networks represent linguistic", "after": "the ways that neural models represent abstract syntactic", "start_char_pos": 1188, "end_char_pos": 1228, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "kinds of structural biases that give learners the ability", "after": "kind of structural inductive biases which a learner needs", "start_char_pos": 1259, "end_char_pos": 1316, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] } ]
[ 0, 135, 265, 463, 746, 971, 1153 ]
arxiv
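The 2004.14601 records study LSTMs pre-trained on non-linguistic data (MIDI music, Java code) and then evaluated on natural language. A sketch of one plausible transfer protocol, reading the abstract as: freeze the recurrent core, re-learn only the embeddings and output layer on the target language, then measure perplexity. Vocabulary sizes and dimensions are placeholders:

```python
import torch
import torch.nn as nn

class LSTMLM(nn.Module):
    def __init__(self, vocab: int, dim: int = 256):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

# 1) Pre-train on non-linguistic tokens (MIDI events, Java code) -- loop omitted.
model = LSTMLM(vocab=10_000)

# 2) Transfer: freeze the recurrent core, swap in fresh embeddings for the
#    natural-language vocabulary, and train only the new parameters.
for p in model.lstm.parameters():
    p.requires_grad = False
model.emb = nn.Embedding(20_000, 256)
model.out = nn.Linear(256, 20_000)
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
```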
2004.14601
2
We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models. We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language. We find that training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap . Surprisingly, training on artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on natural language as well as recursive languages do. Experiments on transfer between natural languages show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced from natural languages correspond to the cross-linguistic syntactic properties studied in linguistic typology . Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which a learner needs to model language .
We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models. We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language. We find that training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run similar experiments with two artificial parentheses languages: one which has a hierarchical recursive structure, and a control which has paired tokens but no recursion . Surprisingly, training a model on either of these artificial languages leads to the same substantial gains when testing on natural language . Further experiments on transfer between natural languages controlling for vocabulary overlap show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced by pre-training correspond to the cross-linguistic syntactic properties . Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which allow for natural language acquisition .
[ { "type": "R", "before": "Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap", "after": "To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run similar experiments with two artificial parentheses languages: one which has a hierarchical recursive structure, and a control which has paired tokens but no recursion", "start_char_pos": 511, "end_char_pos": 669, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "meaning-changed" ] }, { "type": "A", "before": null, "after": "a model on either of these artificial languages leads to the same substantial gains when testing", "start_char_pos": 695, "end_char_pos": 695, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "artificial languages consisting of sets of separated pairs of words, but with no recursion, improves performance on", "after": null, "start_char_pos": 699, "end_char_pos": 814, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "as well as recursive languages do. Experiments", "after": ". Further experiments", "start_char_pos": 832, "end_char_pos": 878, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "controlling for vocabulary overlap", "start_char_pos": 917, "end_char_pos": 917, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "from natural languages", "after": "by pre-training", "start_char_pos": 1094, "end_char_pos": 1116, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "D", "before": "studied in linguistic typology", "after": null, "start_char_pos": 1173, "end_char_pos": 1203, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "a learner needs to model language", "after": "allow for natural language acquisition", "start_char_pos": 1369, "end_char_pos": 1402, "major_intent": "style", "raw_intents": [ "style", "clarity", "style" ] } ]
[ 0, 119, 320, 510, 866, 1205 ]
arxiv
2004.14623
2
In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard behavioral tasks? We argue that an essential factor is modular internal structure. Our central contribution is a new experimental method called 'interchange interventions', in which systematic manipulations of model-internal states are related to causal effects on their outputs, thereby allowing us to identify modular structure. Our work is grounded empirically in a new challenge Natural Language Inference dataset designed to assess systems on their ability to reason about entailment and negation. We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset , and our active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of the BERT architecture, the learned model embeds modular, general theories of lexical entailment relations .
We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions. To facilitate this holistic evaluation, we present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical entailment and negation. In our behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure. In our structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI. Probes yield evidence consistent with this conclusion , and our intervention experiments bolster this, showing that the causal dynamics of the model mirror the causal dynamics of this algorithm on subsets of MoNLI. This suggests that the BERT model at least partially embeds a theory of lexical entailment and negation at an algorithmic level .
[ { "type": "R", "before": "In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard behavioral tasks? We argue that an essential factor is modular internal structure. Our central contribution is a new experimental method called 'interchange interventions', in which systematic manipulations of model-internal states are related to causal effects on their outputs, thereby allowing us to identify modular structure. Our work is grounded empirically in a new challenge Natural Language Inference dataset designed to assess systems on their ability to reason about", "after": "We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions. To facilitate this holistic evaluation, we present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical", "start_char_pos": 0, "end_char_pos": 670, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "We find that a BERT model is strikingly successful at the systematic generalization task we pose using this dataset", "after": "In our behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure. In our structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI. Probes yield evidence consistent with this conclusion", "start_char_pos": 696, "end_char_pos": 811, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "active manipulations of model-internal vectors help us understand why: despite the densely interconnected nature of", "after": "intervention experiments bolster this, showing that the causal dynamics of", "start_char_pos": 822, "end_char_pos": 937, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "BERT architecture, the learned model embeds modular, general theories", "after": "model mirror the causal dynamics of this algorithm on subsets of MoNLI. This suggests that the BERT model at least partially embeds a theory", "start_char_pos": 942, "end_char_pos": 1011, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "relations", "after": "and negation at an algorithmic level", "start_char_pos": 1034, "end_char_pos": 1043, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "style" ] } ]
[ 0, 123, 210, 275, 523, 695 ]
arxiv
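The 2004.14623 record's earlier revision describes "interchange interventions": splice a model-internal state computed on one input into a run on another input and observe the causal effect on the output. A minimal sketch with PyTorch forward hooks; which layer to intervene on is an experimental choice, and the two batches must yield activations of identical shape (e.g., same sequence length):

```python
import torch

def interchange_intervention(model, layer, base_inputs, source_inputs):
    """Run `model` on base_inputs with `layer`'s output replaced by the
    activation it produced on source_inputs. `layer` is any nn.Module inside
    `model`; if it returns a tuple, the whole tuple is swapped."""
    cache = {}

    def save(_module, _inputs, output):
        cache["h"] = output          # record the source activation

    def swap(_module, _inputs, output):
        return cache["h"]            # returning a value replaces the output

    handle = layer.register_forward_hook(save)
    with torch.no_grad():
        model(**source_inputs)
    handle.remove()

    handle = layer.register_forward_hook(swap)
    with torch.no_grad():
        out = model(**base_inputs)   # base input, source-internal state
    handle.remove()
    return out
```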
2004.14974
1
We introduce the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute the claim. In addition, it must provide rationales for its predictions in the form of evidentiary sentences from the retrieved abstracts. For this task, we introduce SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts , and annotated with labels and rationales. We present a baseline model and assess its performance on SciFact. We observe that, while fact-checking models trained on Wikipedia articles or political news have difficulty generalizing to our task, simple domain adaptation techniques represent a promising avenue for improvement. Finally, we provide initial results showing how our model can be used to verify claims relevant to COVID-19 on the CORD-19 corpus. Our dataset will be made publicly available at URL
We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that supports or refutes a given scientific claim, and to identify rationales justifying each decision. To study this task, we construct SciFact, a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts annotated with labels and rationales. We develop baseline models for SciFact, and demonstrate that these models benefit from combined training on a large dataset of claims about Wikipedia articles, together with the new SciFact data. We show that our claim verification system is able to identify plausible evidence for 23 / 36 claims relevant to COVID-19 on the CORD-19 corpus. Our results and experiments strongly suggest that our new task and data will support significant future research efforts.
[ { "type": "R", "before": "the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute the claim. In addition, it must provide rationales for its predictions in the form of evidentiary sentences from the retrieved abstracts. For", "after": "scientific claim verification, a new task to select abstracts from the research literature containing evidence that supports or refutes a given scientific claim, and to identify rationales justifying each decision. To study", "start_char_pos": 13, "end_char_pos": 339, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "introduce", "after": "construct", "start_char_pos": 354, "end_char_pos": 363, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "D", "before": ", and", "after": null, "start_char_pos": 466, "end_char_pos": 471, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "present a baseline model and assess its performance on SciFact. We observe that, while fact-checking models trained on Wikipedia articles or political news have difficulty generalizing to our task, simple domain adaptation techniques represent a promising avenue for improvement. Finally, we provide initial results showing how our model can be used to verify", "after": "develop baseline models for SciFact, and demonstrate that these models benefit from combined training on a large dataset of claims about Wikipedia articles, together with the new SciFact data. We show that our claim verification system is able to identify plausible evidence for 23 / 36", "start_char_pos": 513, "end_char_pos": 872, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "dataset will be made publicly available at URL", "after": "results and experiments strongly suggest that our new task and data will support significant future research efforts.", "start_char_pos": 928, "end_char_pos": 974, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] } ]
[ 0, 50, 208, 335, 509, 576, 792, 923 ]
arxiv
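The 2004.14974 record defines scientific claim verification: select evidence-bearing abstracts for a claim, then label and justify each decision. A sketch of only the first (retrieval) stage using TF-IDF; the downstream label set and verifier are assumptions based on the abstract, not the released SciFact spec:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_abstracts(claim: str, abstracts: list, k: int = 3) -> list:
    """Rank abstracts by TF-IDF cosine similarity to the claim and return
    the indices of the top k. A SUPPORTS / REFUTES / NOT-ENOUGH-INFO
    verifier with rationale selection would sit on top of this stage."""
    vec = TfidfVectorizer().fit(abstracts + [claim])
    sims = cosine_similarity(vec.transform([claim]), vec.transform(abstracts))[0]
    return sorted(range(len(abstracts)), key=lambda i: -sims[i])[:k]
```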
2004.15003
1
One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap of them by considering word-by-word alignment. However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance. We hypothesize that the reason for the inferiority of alignment-based methods is due to the fact that they do not distinguish word importance and word meaning. To solve this , we propose to separate word importance and word meaning by decomposing word vectors into their norm and direction , then compute the alignment-based similarity with the help of earth mover's distance . We call the method word rotator's distance (WRD) because direction vectors are aligned by rotation on the unit hypersphere. In addition, to incorporate the advance of cutting edge additive sentence encoders, we propose to re-decompose such sentence vectors into word vectors and use them as inputs to WRD. Empirically, the proposed method outperforms current methods considering the word-by-word alignment including word mover's distance with a big difference; moreover, our method outperforms state-of-the-art additive sentence encoders on the most competitive dataset, STS-benchmark .
One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are both intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentence vectors. To remedy this, we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity. Alignment-based approaches do not distinguish the norm and direction, whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly , we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance (optimal transport cost), which we refer to as word rotator's distance . Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter); this is a new systematic approach derived from the sentence-vector estimation methods, which can significantly improve the performance of the proposed method. On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines .
[ { "type": "R", "before": "semantic similarity between texts is to measure", "after": "textual similarity is measuring", "start_char_pos": 32, "end_char_pos": 79, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "of them by considering word-by-word alignment. However,", "after": "between two texts by considering the word alignment. Such", "start_char_pos": 111, "end_char_pos": 166, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "are", "after": "are both intuitive and interpretable; however, they are empirically", "start_char_pos": 194, "end_char_pos": 197, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "style", "meaning-changed" ] }, { "type": "R", "before": "generic sentence vectorsin terms of performance. We hypothesize that the reason for the inferiority of alignment-based methods is due to the fact that they", "after": "simple cosine similarity between general-purpose sentence vectors. To remedy this, we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity. Alignment-based approaches", "start_char_pos": 214, "end_char_pos": 369, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "word importance and word meaning. To solve this", "after": "the norm and direction, whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly", "start_char_pos": 389, "end_char_pos": 436, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "separate word importance and word meaning by decomposing word", "after": "decouple word", "start_char_pos": 453, "end_char_pos": 514, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": ", then compute", "after": "then computing", "start_char_pos": 553, "end_char_pos": 567, "major_intent": "clarity", "raw_intents": [ "clarity", "fluency", "clarity" ] }, { "type": "R", "before": "with the help of", "after": "using", "start_char_pos": 599, "end_char_pos": 615, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": ". We call the method", "after": "(optimal transport cost), which we refer to as", "start_char_pos": 639, "end_char_pos": 659, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "clarity" ] }, { "type": "R", "before": "(WRD) because direction vectors are aligned by rotation on the unit hypersphere. In addition, to incorporate the advance of cutting edge additive sentence encoders, we propose to re-decompose such sentence vectors into word vectors and use them as inputs to WRD. Empirically, the proposed method outperforms current methods considering the word-by-word alignment including word mover's distance with a big difference; moreover, our method outperforms state-of-the-art additive sentence encoders on the most competitive dataset, STS-benchmark", "after": ". Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter); this is a new systematic approach derived from the sentence-vector estimation methods, which can significantly improve the performance of the proposed method. 
On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines", "start_char_pos": 684, "end_char_pos": 1225, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] } ]
[ 0, 157, 262, 422, 640, 764, 946, 1101 ]
arxiv
2004.15003
2
One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are both intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentence vectors. To remedy this , we focus on the fact that the norm of word vectors is a good proxy for word importance, and the angle of them is a good proxy for word similarity. Alignment-based approaches do not distinguish the norm and direction , whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly, we propose to decouple word vectors into their norm and direction then computing the alignment-based similarity using earth mover's distance ( optimal transport cost), which we refer to as word rotator's distance. Furthermore, we demonstrate how to grow the norm and direction of word vectors (vector converter) ; this is a new systematic approach derived from the sentence-vector estimation methods , which can significantly improve the performance of the proposed method . On several STS benchmarks, our simple proposed methods outperformed not only alignment-based approaches but also strong baselines.
A key principle in assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentence vectors. To address this issue , we focus on and demonstrate the fact that the norm of word vectors is a good proxy for word importance, and their angle is a good proxy for word similarity. Alignment-based approaches do not distinguish them , whereas sentence-vector approaches automatically use the norm as the word importance. Accordingly, we propose a method that first decouples word vectors into their norm and direction , and then computes alignment-based similarity using earth mover's distance ( i.e., optimal transport cost), which we refer to as word rotator's distance. Besides, we find how to grow the norm and direction of word vectors (vector converter) , which is a new systematic approach derived from sentence-vector estimation methods . On several textual similarity datasets, the combination of these simple proposed methods outperformed not only alignment-based approaches but also strong baselines. The source code is available at URL
[ { "type": "R", "before": "One key principle for", "after": "A key principle in", "start_char_pos": 0, "end_char_pos": 21, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "D", "before": "both", "after": null, "start_char_pos": 184, "end_char_pos": 188, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "R", "before": "remedy this", "after": "address this issue", "start_char_pos": 334, "end_char_pos": 345, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "A", "before": null, "after": "and demonstrate", "start_char_pos": 360, "end_char_pos": 360, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "the angle of them", "after": "their angle", "start_char_pos": 441, "end_char_pos": 458, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "the norm and direction", "after": "them", "start_char_pos": 542, "end_char_pos": 564, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "to decouple", "after": "a method that first decouples", "start_char_pos": 677, "end_char_pos": 688, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "then computing the", "after": ", and then computes", "start_char_pos": 732, "end_char_pos": 750, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "i.e.,", "start_char_pos": 809, "end_char_pos": 809, "major_intent": "coherence", "raw_intents": [ "coherence", "fluency", "coherence" ] }, { "type": "R", "before": "Furthermore, we demonstrate", "after": "Besides, we find", "start_char_pos": 881, "end_char_pos": 908, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": "; this", "after": ", which", "start_char_pos": 979, "end_char_pos": 985, "major_intent": "coherence", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "D", "before": "the", "after": null, "start_char_pos": 1028, "end_char_pos": 1031, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "D", "before": ", which can significantly improve the performance of the proposed method", "after": null, "start_char_pos": 1067, "end_char_pos": 1139, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, { "type": "R", "before": "STS benchmarks, our", "after": "textual similarity datasets, the combination of these", "start_char_pos": 1153, "end_char_pos": 1172, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "The source code is available at URL", "start_char_pos": 1273, "end_char_pos": 1273, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 147, 217, 330, 495, 652, 880, 980, 1141 ]
arxiv
2004.15011
1
We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression requiring expert background knowledge and complex language understanding. To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs. Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related tasks of extreme summarization and title generation, which outperforms strong extractive and abstractive summarization baselines.
We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression , requiring expert background knowledge and complex language understanding. To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs. Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 116, "end_char_pos": 116, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "tasks of extreme summarization and", "after": "task of", "start_char_pos": 733, "end_char_pos": 767, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] } ]
[ 0, 190, 274, 411, 594 ]
arxiv
2004.15011
2
We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding . To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotation protocol for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines .
We introduce TLDR generation , a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language . To facilitate study on this task, we introduce SciTLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations. Data and code are publicly available at URL
[ { "type": "D", "before": "for scientific papers", "after": null, "start_char_pos": 29, "end_char_pos": 50, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "automatic summarizationtask with", "after": "form of extreme summarization, for scientific papers. TLDR generation involves", "start_char_pos": 59, "end_char_pos": 91, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": ", requiring", "after": "and requires", "start_char_pos": 116, "end_char_pos": 127, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "clarity" ] }, { "type": "R", "before": "complex language understanding", "after": "understanding of complex domain-specific language", "start_char_pos": 160, "end_char_pos": 190, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "research", "after": "study", "start_char_pos": 207, "end_char_pos": 215, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "dataset of 3.9K TLDRs . Furthermore, we introduce", "after": "new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using", "start_char_pos": 254, "end_char_pos": 303, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "for scalably curating additional gold summaries by rewriting peer review comments. We use this protocol to augment our test set, yielding multiple gold TLDRs for evaluation, which is unlike most recent summarization datasets that assume only one valid gold summary. We present a training strategy for adapting pretrained language models that exploits similarities between TLDR generation and the related task of title generation, which outperforms strong extractive and abstractive summarization baselines .", "after": "that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations. Data and code are publicly available at URL", "start_char_pos": 332, "end_char_pos": 839, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] } ]
[ 0, 192, 414, 597 ]
arxiv
2005.00192
2
In the automatic evaluation of generative question answering (GenQA) systems, it is difficult to assess the correctness of generated answers due to the free-form of the answer. Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. To study a better metric for GenQA, we first create high-quality human judgments of correctness on two standard GenQA datasets. Using our human-evaluation datasets, we show that widely used n-gram similarity metrics do not correlate with human judgments . To alleviate this problem, we propose a new metric for evaluating the correctness of GenQA. Specifically, our new metric assigns different weights to each token via keyphrase prediction, thereby judging whether a generated answer sentence captures the key meaning of the reference answer. Our proposed metric shows a significantly higher correlation with human judgments than existing metrics in various datasets.
In the automatic evaluation of generative question answering (GenQA) systems, it is difficult to assess the correctness of generated answers due to the free-form of the answer. Especially, widely used n-gram similarity metrics often fail to discriminate the incorrect answers since they equally consider all of the tokens . To alleviate this problem, we propose KPQA-metric, a new metric for evaluating the correctness of GenQA. Specifically, our new metric assigns different weights to each token via keyphrase prediction, thereby judging whether a generated answer sentence captures the key meaning of the reference answer. To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets. Using our human-evaluation datasets, we show that our proposed metric has a significantly higher correlation with human judgments than existing metrics . The code is available at URL
[ { "type": "R", "before": "Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. To study a better metric for GenQA, we first create high-quality human judgments of correctness on two standard GenQA datasets. Using our human-evaluation datasets, we show that", "after": "Especially,", "start_char_pos": 177, "end_char_pos": 475, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "do not correlate with human judgments", "after": "often fail to discriminate the incorrect answers since they equally consider all of the tokens", "start_char_pos": 514, "end_char_pos": 551, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "KPQA-metric,", "start_char_pos": 592, "end_char_pos": 592, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "Our proposed metric shows", "after": "To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets. Using our human-evaluation datasets, we show that our proposed metric has", "start_char_pos": 844, "end_char_pos": 869, "major_intent": "meaning-changed", "raw_intents": [ "coherence", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "in various datasets.", "after": ". The code is available at URL", "start_char_pos": 948, "end_char_pos": 968, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 176, 297, 425, 553, 646, 843 ]
arxiv
2005.00782
1
Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations. Prior studies of PTLMs have found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness . In this work, we address this gap by developing a procedure that allows for the systematized probing of both PTLMs' inference abilities and robustness. Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used to probe PTLMs in three task settings. We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are no better than random guessing (even with fine-tuning) and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness. We hope our approach and initial probe set will assist future work in improving PTLMs' inference abilities , while also providing a probing set to test robustness under several linguistic variations--code and data will be released .
Pre-trained language models (PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humans requires inferences based on implicit commonsense relationships, and robustness despite paraphrasing. In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA, that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work, we develop a systematic procedure to probe PTLMs across three different evaluation settings. Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks. Our framework and probe sets can help future work improve PTLMs' inference abilities and robustness to linguistic variations--bringing us closer to more fluid communication .
[ { "type": "R", "before": "greatly improved", "after": "impressive", "start_char_pos": 40, "end_char_pos": 56, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations. Prior studies", "after": "but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations", "start_char_pos": 106, "end_char_pos": 245, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "found inference deficits, but have failed to provide a systematic means of understanding whether these deficits are due to low inference abilities or poor inference robustness . In this work, we address this gap by developing a procedure that allows for the systematized probing of both PTLMs' inference abilities and robustness. Our procedure centers around the methodical creation of logically-equivalent, but syntactically-different sets of probes, of which we create a corpus of 14,400 probes coming from 60 logically-equivalent sets that can be used", "after": "focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humans requires inferences based on implicit commonsense relationships, and robustness despite paraphrasing. In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA, that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work, we develop a systematic procedure", "start_char_pos": 260, "end_char_pos": 814, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "in three task settings. We find that despite the recent success of large PTLMs on commonsense benchmarks, their performances on our probes are", "after": "across three different evaluation settings. Extensive experiments on our generated probe sets show that PTLMs perform", "start_char_pos": 830, "end_char_pos": 972, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "and are heavily dependent on biases--the poor overall performance, unfortunately, inhibits us from studying robustness. We hope our approach and initial probe set will assist future work in improving", "after": ", are heavily impacted by statistical biases, and are not robust to perturbation attacks. Our framework and probe sets can help future work improve", "start_char_pos": 1028, "end_char_pos": 1227, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "coherence" ] }, { "type": "R", "before": ", while also providing a probing set to test robustness under several linguistic variations--code and data will be released", "after": "and robustness to linguistic variations--bringing us closer to more fluid communication", "start_char_pos": 1255, "end_char_pos": 1378, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 231, 437, 589, 853, 1147 ]
null
2005.00782
2
Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humansrequires inferences based on implicit commonsense relationships, and robustness despite paraphrasing . In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA , that evaluates the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work , we develop a systematic procedure to probe PTLMs across three different evaluation settings. Extensive experiments on our generated probe sets show that PTLMs perform no better than random guessing (even with fine-tuning) , are heavily impacted by statistical biases, and are not robust to perturbation attacks. Our framework and probe sets can help future work improve PTLMs ' inference abilities and robustness to linguistic variations--bringing us closer to more fluid communication .
Pre-trained language models ( PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated . In the pursuit of advancing fluid human-AI communication, we propose a new challenge, RICA : Robust Inference capability based on Commonsense Axioms , that evaluates robust commonsense inference despite textual perturbations. To generate data for this challenge , we develop a systematic and scalable procedure using commonsense knowledge bases and probe PTLMs across two different evaluation settings. Extensive experiments on our generated probe sets with more than 10k statements show that PTLMs perform no better than random guessing on the zero-shot setting , are heavily impacted by statistical biases, and are not robust to perturbation attacks. We also find that fine-tuning on similar statements offer limited gains, as PTLMs still fail to generalize to unseen inferences. Our new large-scale benchmark exposes a significant gap between PTLMs and human-level language understanding and offers a new challenge for PTLMs to demonstrate commonsense .
[ { "type": "R", "before": "PTLM) have", "after": "PTLMs) have achieved", "start_char_pos": 30, "end_char_pos": 40, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "D", "before": "practically", "after": null, "start_char_pos": 122, "end_char_pos": 133, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge is provided explicitly. However, effective communication with humansrequires inferences based on implicit commonsense relationships, and robustness despite paraphrasing", "after": "make robust inferences, which is crucial for effective communications with humans, is debated", "start_char_pos": 156, "end_char_pos": 490, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ": Robust Inference capability based on Commonsense Axioms", "start_char_pos": 584, "end_char_pos": 584, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "the capabilities of making commonsense inferences and the robustness of these inferences to language variations. In our work", "after": "robust commonsense inference despite textual perturbations. To generate data for this challenge", "start_char_pos": 602, "end_char_pos": 726, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "procedure to", "after": "and scalable procedure using commonsense knowledge bases and", "start_char_pos": 753, "end_char_pos": 765, "major_intent": "meaning-changed", "raw_intents": [ "clarity", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "three", "after": "two", "start_char_pos": 785, "end_char_pos": 790, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "with more than 10k statements", "start_char_pos": 872, "end_char_pos": 872, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "(even with fine-tuning)", "after": "on the zero-shot setting", "start_char_pos": 928, "end_char_pos": 951, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "R", "before": "Our framework and probe sets can help future work improve PTLMs ' inference abilities and robustness to linguistic variations--bringing us closer to more fluid communication", "after": "We also find that fine-tuning on similar statements offer limited gains, as PTLMs still fail to generalize to unseen inferences. Our new large-scale benchmark exposes a significant gap between PTLMs and human-level language understanding and offers a new challenge for PTLMs to demonstrate commonsense", "start_char_pos": 1042, "end_char_pos": 1215, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] } ]
[ 0, 200, 345, 492, 714, 821, 1041 ]
null
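
Each record above pairs a before_revision string with an edit_actions list whose entries mark character spans of the before text: "R" replaces the span with the after string, "A" inserts it at an empty span (before is null), and "D" removes the span (after is null). What follows is a minimal sketch, under that reading of the schema, of how the actions can be re-applied to recover after_revision; applying them in descending start_char_pos keeps earlier offsets valid.

```python
# Minimal sketch assuming the edit_actions field semantics shown above;
# not part of the dataset release itself.
def apply_edit_actions(before_revision: str, edit_actions: list) -> str:
    text = before_revision
    # Apply right-to-left so character offsets of earlier edits stay valid.
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # a null `after` is a deletion ("D")
        text = text[: act["start_char_pos"]] + replacement + text[act["end_char_pos"] :]
    return text
```

Under this reading, apply_edit_actions(before_revision, edit_actions) should reproduce after_revision up to the detokenization whitespace visible in the dump.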

Paper: Understanding Iterative Revision from Human-Written Text

Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang

GitHub repo: https://github.com/vipulraheja/IteraTeR
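
For programmatic access, the records can be loaded with the Hugging Face datasets library. A hedged usage sketch follows: the hub id below is an assumption (the GitHub repository above lists the released dataset names), and the field names are taken from the schema shown in this dump.

```python
# Hypothetical loading example; "wanyu/IteraTeR_full_sent" is an assumed
# hub id -- check the repository above for the actual released datasets.
from datasets import load_dataset

ds = load_dataset("wanyu/IteraTeR_full_sent", split="train")
example = ds[0]
print(example["before_revision"])  # source text of the revision pair
print(example["edit_actions"])     # span-level edits, as in the rows above
```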
