id,sentence1,sentence2,label
test_0,"On one hand, assuming that the parallel sentences have the same meaning, many datasets find the aligned sentences to have higher string overlap (as measured by BLEU).","the two sentences should have different styles, and may vary a lot in expressions: and thus leading to a lower BLEU.",contrasting
test_1,"Recent solutions (Gidaris and Komodakis, 2018) leverage a memory component to maintain models' learning experience, e.g., by finding from a supervised stage the content that is similar to the unseen classes, leading to the state-of-the-art performance.",the memory weights are static during inference and the capability of the model is still limited when adapted to new classes.,contrasting
test_2,"A larger KES leads to predict more unique keyphrases, append less absolutely incorrect keyphrases and improve the chance to output more accurate keyphrases.","generating more unique keyphrases may also lead to more incorrect predictions, which will degrade the F1@M scores since F1@M considers all the unique predictions without a fixed cutoff.",contrasting
test_3,"For instance, all the baselines produce the keyphrase ""debugging"" at least three times.","our ExHiRD-h only generates it once, which demonstrates that our proposed method is more powerful in avoiding duplicated keyphrases.",contrasting
test_4,Those models mainly focus on improving decoders based on the constraint of hierarchical paths.,"we propose an effective hierarchy-aware global model, HiAGM, that extracts label-wise text features with hierarchy encoders based on prior hierarchy information.",contrasting
test_5,Previous global models classify labels upon the original textual information and improve the decoder with predefined hierarchy paths.,we construct a novel end-to-end hierarchy-aware global model (HiAGM) for the mutual interaction of text features and label correlations.,contrasting
test_6,"As for the amount of operators' effort, we observed a slight decrease in
HTER with the increase of pre-filtering conditions, indicating an improvement in the quality of candidates.","hTER scores were all between 0.1 and 0.2, much below the 0.4 acceptability threshold defined by Turchi et al.",contrasting
test_7,"Finally, we observe that despite reducing the output diversity and novelty, the reduction of expert effort by Reviewer≥2 in terms of the percentage of the obtained pairs is not attainable by a machine yet.","automatic filtering (Reviewer machine) is a viable solution since (i) it helps the NGO operators save time better than human filter ≥1, (ii) it preserves diversity and novelty better than Reviewer≥2 and in line with Reviewer≥1.",contrasting
test_8,"In this scenario, automation strategies, such as natural language generation, are necessary to help NGO operators in their countering effort.","these automation approaches are not mature yet, since they suffer from the lack of sufficient amount of quality data and tend to produce generic/repetitive responses.",contrasting
test_9,"However, gold data for the target language (stage) is usually inaccessible, often preventing evaluation against human judgment.",we here propose several alternative evaluation set-ups as an integral part of our methodology.,contrasting
test_10,"Words, such as German Sonnenschein for which a translational equivalent already exists in the Source (""sunshine""; see Figure 1), mainly rely on translation, while the prediction step acts as an optional refinement procedure.","the prediction step is crucial for words, such as Erdbeben, whose translational equivalents (""earthquake"") are missing in the Source.",contrasting
test_11,We want to point out that not every single entry should be considered meaningful because of noise in the embedding vocabulary caused by typos and tokenization errors.,"choosing the ""best"" size for an emotion lexicon necessarily translates into a quality-coverage trade-off for which there is no general solution.",contrasting
test_12,Neural MT (NMT) approaches have certainly made significant progress in this direction.,the diversity of possible outcomes makes it harder to evaluate MT models.,contrasting
test_13,"Dreyer and Marcu (2012) showed that if multiple human translations are used, any automatic MT evaluation metric achieves a substantially higher correlation with human judgments.",multiple translations are hardly ever available in practice due to the cost of collecting them.,contrasting
test_14,"The distinction between intended and perceived sarcasm, also referred to as encoded and decoded sarcasm, respectively, has been pointed out in previous research (Kaufer, 1981; Rockwell and Theriot, 2001).",it has not been considered in a computational context thus far when building datasets for textual sarcasm detection.,contrasting
test_15,"Text summarization has recently received increased attention with the rise of deep learning-based end-to-end models, both for extractive and abstractive variants.","so far, only single-document summarization has profited from this trend.",contrasting
test_16,"Apparently, a summarization method is desirable to achieve a ROUGE score of 100, i.e., a system output is identical to the reference.",this is an unrealistic goal for the task setting on the Gigaword dataset.,contrasting
test_17,"They addressed the problem where an abstractive model made mistakes in facts (e.g., tuples of subjects, predicates, and objects).",we also regularly see examples where the abstractive model generates unexpected words.,contrasting
test_18,"For JNC, we use the pretrained BERT model for Japanese text (Kikuta, 2019).",no large-scale Japanese corpus for semantic inference (counterpart to MultiNLI) is available.,contrasting
test_19,"We could confirm the improvements from the support scores, entailment ratio, and human judgments.",the ROUGE scores between system and reference headlines did not indicate a clear difference.,contrasting
test_20,"Likert-score based self-reported user
rating is widely adopted by social conversational systems, such as Amazon Alexa Prize chatbots.",self-reported user rating suffers from bias and variance among different users.,contrasting
test_21,"ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and XLNet (Yang et al., 2019) have achieved great success in many NLP tasks.",it is difficult to apply them in the industrial dialog system due to their low computational efficiency.,contrasting
test_22,"Previous work has demonstrated that neural encoders capture a rich hierarchy of syntactic and semantic information (Jawahar et al., 2019; Clark et al., 2019).","reasoning capability and commonsense knowledge are not captured sufficiently (Young et al., 2018).",contrasting
test_23,"BERT-MC and RoBERTa-MC obtain similar results with BERT and RoBERTa, respectively.","even RoBERTa is far behind human performance 23 points on R@1, indicating that MuTual is indeed a challenging dataset, which opens the door for tackling new and complex reasoning problems in multi-turn conversations.",contrasting
test_24,"The language score is evaluated individually, without considering the discourse coherence.","a reasonable response should establish links in meaning with context, which is also an important aspect of humanlike responses.",contrasting
test_25,"In practice, the solver can always find a solution to linearize the subtrees with the constraints.","it sometimes cannot find any solution to directly linearize the full tree within the time limit (1-10% of the cases depending on the treebank), because there are more nodes and more constraints in the full tree.",contrasting
test_26,"However, these approaches only consider sentence-level QG.",our work focuses on the challenge of generating deep questions with multi-hop reasoning over document-level contexts.,contrasting
test_27,"Among them, ""Semantic Error"", ""Redundant"", and ""Unanswerable"" are noticeable errors for all models.",we find that baselines have more unreasonable subject-predicate-object collocations (semantic errors) than our model.,contrasting
test_28,Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction.,few existing works excel in solving the overlapping triple problem where multiple relational triples in the same sentence share the same entities.,contrasting
test_29,"To reduce manual work, recent studies have investigated neural network-based methods, which deliver state-of-the-art performance.","most existing neural models like (Miwa and Bansal, 2016) achieve joint learning of entities and relations only through parameter sharing but not joint decoding.",contrasting
test_30,"Therefore, the object tagger for relation ""Work in"" will not identify the span of ""Washington"", i.e., the output of both start and end position are all zeros as shown in Figure 2.","the relation ""Birth place"" holds between ""Jackie R. Brown"" and ""Washington"", so the corresponding object tagger outputs the span of the candidate object ""Washington"".",contrasting
test_31,"Such inconsistent data distribution of two datasets leads to a comparatively better performance on NYT and a worse performance on WebNLG for all the baselines, exposing their drawbacks in extracting overlapping relational triples.","the CASREL model and its variants (i.e., CASREL random and CASREL LSTM) all achieve a stable and competitive performance on both NYT and WebNLG datasets, demonstrating the effectiveness of the proposed framework in solving the overlapping problem.",contrasting
test_32,"In other words, it implies that identifying relations is somehow easier than identifying entities for our model.","to NYT, for WebNLG, the performance gap between (E1, E2) and (E1, R, E2) is comparatively larger than that between (E1, R, E2) and (E1, R)/(R, E2).",contrasting
test_33,"We also find that there is only a trivial gap between the F1-score on (E1, E2) and (E1, R, E2), but an obvious gap between (E1, R, E2) and
(E1, R)/(R, E2).",it reveals that most relations for the entity pairs in extracted triples are correctly identified while some extracted entities fail to form a valid relational triple,contrasting
test_34,"Information Extraction (IE), and specifically Relation Extraction (RE), can improve the access to central information for downstream tasks (Santos et al., 2015; Zeng et al., 2014; Jiang et al., 2016; Miwa and Bansal, 2016; Luan et al., 2018a).","the focus of current RE systems and datasets is either too narrow, i.e., a handful of semantic relations, such as 'USED-FOR' and 'SYNONYMY', or too broad, i.e., an unbounded number of generic relations extracted from large, heterogeneous corpora (Niklaus et al., 2018), referred to as Open IE (OIE) (Etzioni et al., 2005; Banko et al., 2007).",contrasting
test_35,"It has been shown that scientific texts contain many unique relation types and, therefore, it is not feasible to create separate narrow IE classifiers for these (Groth et al., 2018).",oIE systems are primarily developed for the Web and news-wire domain and have been shown to perform poorly on scientific texts.,contrasting
test_36,Lei et al. (2017) conduct word pair interaction score to capture both linear and quadratic relation for argument representation.,"these methods utilize the pre-trained embeddings for mining the interaction features and ignore the geometric structure information entailed in discourse arguments and their relation.",contrasting
test_37,Xu et al. (2019) propose a topic tensor network (TTN) to model the sentence-level interactions and topic-level relevance among arguments for this task.,few studies model discourse relations by translating them in the low-dimensional embedding space as we do in this work.,contrasting
test_38,"With the increasing of the number of encoder layers, the model could capture the richer semantic information.","the results imply that with the more encoder layers considered, the model could incur the over-fitting problem due to adding more parameters.",contrasting
test_39,"Based on BLEU (Papineni et al., 2002) scores, previous work (Belinkov et al., 2017) suggests that translating into morphologically rich languages, such as Hungarian or Finnish, is harder than translating into morphologically poor ones, such as English.","a major obstacle in the crosslingual comparison of MT systems is that many automatic evaluation metrics, including BLEU and METEOR (Banerjee and Lavie, 2005), are not cross-lingually comparable.",contrasting
test_40,"Reliable reference-free evaluation metrics, directly measuring the (semantic) correspondence between the source language text and system translation, would remove the need for human references and allow for unlimited MT evaluations: any monolingual corpus could be used for evaluating MT systems.","the proposals of reference-free MT evaluation metrics have been few and far apart and have required either non-negligible supervision (i.e., human translation quality labels) (Specia et al., 2010) or language-specific preprocessing like semantic parsing (Lo et al., 2014; Lo, 2019), both hindering the wide applicability of the proposed metrics.",contrasting
test_41,"Position encoding (PE), an essential part of self-attention networks (SANs), is used to preserve the word order information for natural language processing tasks, generating fixed position indices for input sequences.","in cross-lingual scenarios, e.g., machine translation, the PEs of source and target
sentences are modeled independently.",contrasting
test_42,The filtering step as performed by Grave et al. (2018) consisted in only keeping the lines exceeding 100 bytes in length.,"considering that Common Crawl is a multilingual UTF-8 encoded corpus, this 100-byte threshold creates a huge disparity between ASCII and non-ASCII encoded languages.",contrasting
test_43,"Considering the discussion above, we believe an interesting follow-up to our experiments would be training the ELMo models for more of the languages included in the OSCAR corpus.","training ELMo is computationally costly, and one way to estimate this cost, as pointed out by Strubell et al. (2019), is by using the training times of each model to compute both power consumption and CO2 emissions.",contrasting
test_44,"Even though it would have been interesting to replicate all our experiments and computational cost estimations with state-of-the-art fine-tuning models such as BERT, XLNet, RoBERTa or ALBERT, we recall that these transformer-based architectures are extremely costly to train, as noted by the BERT authors on the official BERT GitHub repository, and are currently beyond the scope of our computational infrastructure.","we believe that ELMo contextualized word embeddings remain a useful model that still provide an extremely good trade-off between performance to training cost, even setting new state-of-the-art scores in parsing and POS tagging for our five chosen languages, performing even better than the multilingual mBERT model.",contrasting
test_45,All neural models achieve a SG score significantly greater than a random baseline (dashed line).,"the range within neural models is notable, with the best-performing model (GPT-2-XL) scoring over twice as high as the worst-performing model (LSTM).",contrasting
test_46,The TTG approach cannot make use at run-time of an En QE model without translating the caption back to English and thus again requiring perfect translation in order not to ruin the predicted quality score.,"the PLuGS approach appears to be best suited for leveraging an existing En QE model, due to the availability of the generated bilingual output that tends to maintain consistency between the generated EN- & X-language outputs, with respect to accuracy; therefore, directly applying an English QE model appears to be the most appropriate scalable solution.",contrasting
test_47,"They read one sentence at a time and provided a suspense judgement using the five-point scale consisting of Big Decrease in suspense (1% of the cases), Decrease (11%), Same (50%), Increase (31%), and Big Increase (7%).","to prior work (Delatorre et al., 2018), a relative rather than absolute scale was used.",contrasting
test_48,One of our main hypotheses is that modeling edit sequences is better suited for this task than generating comments from scratch.,"a counter argument could be that a comment generation model could be trained from substantially more data, since it is much easier to obtain parallel data in the form (method, comment), without the constraints of simultaneous code/comment edits.",contrasting
test_49,Most of these approaches focus on code summarization or comment generation which only require single code-NL pairs for training and evaluation as the task entails generating a natural language summary of a given code snippet.,our proposed task requires two code-NL pairs that are assumed to hold specific parallel relationships with one another.,contrasting
test_50,FACTEDITOR has a larger number of correct editing (CQT) than ENCDECEDITOR for fact-based text editing.,eNCDECEDITOR often includes a larger number of unnecessary rephrasings (UPARA) than FACTEDITOR.,contrasting
test_51,ENCDECEDITOR often generates the same facts multiple times (RPT) or facts in different relations (DREL).,fACTEDITOR can seldomly make such errors.,contrasting
test_52,ENCDECEDITOR cannot effectively eliminate the description about an unsupported fact (in orange) appearing in the draft text.,fACTEDITOR can deal with the problem well.,contrasting
test_53,"Gehrmann et al. (2018) utilized a data-efficient content selector, by aligning source and target, to restrict the model’s attention to likely-to-copy phrases.",we use the content selector to find domain knowledge alignment between source and target.,contrasting
test_54,"Such data-driven approaches achieve good performance on several benchmarks like E2E challenge (Novikova et al., 2017), WebNLG challenge (Gardent et al., 2017) and WIKIBIO (Lebret et al., 2016).",they rely on massive amount of training data.,contrasting
test_55,"Ma et al. (2019) propose low-resource table-to-text generation with 1,000 paired examples and large-scale target-side examples.","in our setting, only tens to hundreds of paired training examples are required, meanwhile without the need for any target examples.",contrasting
test_56,This spillover is potentially sensitive only to Levenshtein distance.,confusability is sensitive to fine-grained perceptual structure.,contrasting
test_57,"For this case, IG, IC, and subjectivity all have overlapping confidence intervals, so we conclude that there is no evidence that one is better than the other.",we do have evidence that IG and IC are more accurate than PMI when estimated based on clusters.,contrasting
test_58,"The idea of adding the NOTA option to a candidate set is also widely used in other language technology fields like speaker verification (Pathak and Raj, 2013).",the effect of adding NOTA is rarely introduced in dialog retrieval research problems.,contrasting
test_59,"D-GPT gets the wrong answer 18% of the time (option a and c), because the input answer predicted by the CoQA baseline is also incorrect 17% of the time.","with oracle answers, it is able to generate correct responses 77% of the times (option e).",contrasting
test_60,"Question Answering Using crowd-sourcing methods to create QA datasets (Rajpurkar et al., 2016; Bajaj et al., 2016; Rajpurkar et al., 2018),
conversational datasets (Dinan et al., 2018), and ConvQA datasets (Choi et al., 2018; Reddy et al., 2019; Elgohary et al., 2018; Saha et al., 2018) has largely driven recent methodological advances.","models trained on these ConvQA datasets typically select exact answer spans instead of generating them (Yatskar, 2019b).",contrasting
test_61,"As shown in Table 1, the hidden dimension of each building block is only 128.",we introduce two linear transformations for each building block to adjust its input and output dimensions to 512.,contrasting
test_62,One may either only use the bottlenecks for MobileBERT (correspondingly the teacher becomes BERT LARGE) or only the inverted-bottlenecks for IB-BERT (then there is no bottleneck in MobileBERT) to align their feature maps.,"when using both of them, we can allow IB-BERT LARGE to preserve the performance of BERT LARGE while having MobileBERT sufficiently compact.",contrasting
test_63,This has enabled ever-increasing performance on benchmark data sets.,one thing has remained relatively constant: the softmax of a dot product as the output layer.,contrasting
test_64,Recently Graph Neural Network (GNN) has shown to be powerful in successfully tackling many tasks.,there has been no attempt to exploit GNN to create taxonomies.,contrasting
test_65,We get better performance if we tune the thresholds.,we chose a harder task and proved our model has better performance than others even when we simply use 0.5 as the threshold.,contrasting
test_66,"The model identifies words related to community (""kids,"" ""neighborhood,"" ""we"") as strong negative signals for depression, supporting that depressed language reflects detachment from community.",the model only focuses on these semantic themes in responses to generic backchannel categories.,contrasting
test_67,"Thanks to the increased complexity of deep neural networks and use of knowledge transfer from the language models pretrained on large-scale corpora (Peters et al., 2018; Devlin et al.,
2019; Dong et al., 2019), the state-of-the-art QA models have achieved human-level performance on several benchmark datasets (Rajpurkar et al., 2016, 2018)","what is also crucial to the success of the recent data-driven models, is the availability of large-scale QA datasets",contrasting
test_68,"Some of the recent works resort to semi-supervised learning, by leveraging large amount of unlabeled text (e.g. Wikipedia) to generate synthetic QA pairs with the help of QG systems (Tang et al., 2017; Yang et al., 2017; Tang et al., 2018; Sachan and Xing, 2018).","existing QG systems have overlooked an important point that generating QA pairs from a context consisting of unstructured texts, is essentially a one-to-many problem.",contrasting
test_69,"They should be semantically consistent, such that it is possible to predict the answer given the question and the context.","neural QG or QAG models often generate questions irrelevant to the context and the answer (Zhang and Bansal, 2019) due to the lack of the mechanism enforcing this consistency.",contrasting
test_70,"For example, when both models are trained with 1% of the Yelp dataset, the accuracy gap is around 9%.","as we increase the amount of training data to 90%, the accuracy gap drops to within 2%.",contrasting
test_71,"Upon further investigation, we find that experiments which use probabilities with image based features have an inter-quartile range of 0.05 and 0.1 for EBG and BLOG respectively whereas for experiments using probabilities with binning based features, this range is 0.32 for both datasets.","inter-quartile range for experiments using ranks with image based features is 0.08 and 0.05 for EBG and BLOG whereas for experiments using ranks with binning based features, this range is 0.49 and 0.42 respectively.",contrasting
test_72,"If we have a large number of training samples, the architecture is capable of learning how to discriminate correctly between classes only with the original training data.","in
less-resourced scenarios, our proposed approaches with external knowledge integration could achieve a high positive impact.",contrasting
test_73,"These offer obvious benefits to users in terms of immediacy, interaction and convenience.",it remains challenging for application providers to assess language content collected through these means.,contrasting
test_74,We make our code publicly available for others to use for benchmarking and replication experiments.,"to feature-based scoring, we instead train neural networks on ASR transcriptions which are labeled with proficiency scores assigned by human examiners, and guide the networks with objectives that prioritize language understanding.",contrasting
test_75,"These results were not significantly better than the single-task POS prediction model, though we did not explore tuning the alpha weighting values for the combination models.",bERT only receives a significant improvement in grading ability when using the L1 prediction task.,contrasting
test_76,"Figure 4 shows, as expected, that training a speech grader with data from an ASR system with lower word error rates produces better results.",it is interesting to note that this holds true even when evaluating with data from inferior ASR systems.,contrasting
test_77,"This is because SciBERT, like other pretrained language models, is trained via language modeling objectives, which only predict words or sentences given their in-document, nearby textual context.","we propose to incorporate citations into the model as a signal of inter-document relatedness, while still leveraging the model's existing strength in modeling language.",contrasting
test_78,The candidate program should adhere to the grammatical specification of the target language.,"since incorporating the complete set of C++ grammatical constraints would require significant engineering effort, we instead restrict our attention to the set of ""primary expressions"" consisting of high-level control structures such
as if, else, for loops, function declarations, etc.",contrasting
test_79,"For example, when there is only one statement within an if statement, the programmer can optionally include a curly brace.",the pseudocode does not contain such detailed information about style.,contrasting
test_80,This error can be ruled out by SymTable constraint if variable A is undeclared.,symTable constraints do not preclude all errors related to declarations.,contrasting
test_81,Prior works regarded SQG as a dialog generation task and recurrently produced each question.,they suffered from problems caused by error cascades and could only capture limited context dependencies.,contrasting
test_82,We expect that this category is rare because the premise is not text.,"since there are some textual elements in the tables, the hypothesis could paraphrase them.",contrasting
test_83,"Finally, research into multimodal or multi-view deep learning (Ngiam et al., 2011; Li et al., 2018) offers insights to effectively combine multiple data modalities or views on the same learning problem.","most work does not directly apply to our problem: i) the audio-text modality is significantly under-represented, ii) the models are typically not required to work online, and iii) most tasks are cast as document-level classification and not sequence labeling (Zadeh et al., 2018).",contrasting
test_84,Current ASR approaches rely solely on utilizing audio input to produce transcriptions.,the wide availability of cameras in smartphones and home devices acts as motivation to build AV-ASR models that rely on and benefit from multimodal input.,contrasting
test_85,"Even for datasets with dialogue taking place in a similar domain as improv, they naturally contain only a small proportion of yes-ands.",the relatively large sizes of these datasets still make them useful for dialogue systems.,contrasting
test_86,Their model uses an exemplar sentence as a syntactic guide during generation; the generated paraphrase is trained to incorporate the semantics of the input sentence while emulating the syntactic structure of the exemplar (see Appendix D for examples).,"their proposed approach depends on the availability of such exemplars at test time; they manually constructed these for their test set (800 examples).",contrasting
test_87,"Recent work on controlled generation aims at controlling attributes such as sentiment (Shen et al., 2017), gender or political slant (Prabhumoye et al., 2018), topic (Wang et al., 2017), etc.",these methods cannot achieve fine-grained control over a property like syntax.,contrasting
test_88,"In the context of translation, there is often a canonical reordering that should be applied to align better with the target language; for instance, head-final languages like Japanese exhibit highly regular syntax-governed reorderings compared to English.","in diverse paraphrase generation, there doesn't exist a single canonical reordering, making our problem quite different.",contrasting
test_89,"Still, dialogue research papers tend to report scores based on word-overlap metrics from the machine translation literature (e.g. BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014)).","word-overlap metrics aggressively penalize the generated response based on lexical differences with the ground truth and correlate poorly to human judgements (Liu et al., 2016).",contrasting
test_90,"Lowe et al. (2017) propose a learned referenced metric named ADEM, which learns an alignment score between context and response to predict human score annotations.","since the score is trained to mimic human judgements, it requires collecting large-scale human annotations on the dataset in question and cannot be easily applicable to new datasets (Lowe, 2019).",contrasting
test_91,"Recently, Tao et al.
(2017) proposed a hybrid referenced-unreferenced metric named RUBER, where the metric is trained without requiring human responses by bootstrapping negative samples directly from the dataset.","referenced metrics (including RUBER, as it is part referenced) are not feasible for evaluation of dialogue models in an online setting-when the model is pitched against a human agent (model-human) or a model agent (model-model)-due to lack of a reference response.",contrasting
test_92,All models achieve high scores on the semantic positive samples when only trained with syntactical adversaries.,training only with syntactical negative samples results in adverse effect on detecting semantic negative items.,contrasting
test_93,"It has been shown that response timings vary based on the semantic content of dialogue responses and the preceding turn (Levinson and Torreira, 2015), and that listeners are sensitive to these fluctuations in timing (Bögels and Levinson, 2017).",the question of whether certain response timings within different contexts are considered more realistic than others has not been fully investigated.,contrasting
test_94,"While Step-By-Step uses heuristic string matching to extract plans from the referenced sentences, other methods (GRU and transformer), as well as ours, use plans provided in the enriched WebNLG dataset (Castro Ferreira et al., 2018).",step-By-step reported worse BLEU results on these plans.,contrasting
test_95,"GCN does not perform well on Coverage, which demonstrates that the structural gap between encoding and decoding indeed makes generation more difficult.","it has the smallest difference between Coverage and Faithfulness among all the baselines, indicating that the fidelity of generation can benefit from the encoding of graph-level structural information.",contrasting
test_96,These existing neural models have achieved encouraging results.,"when a new condition is added (e.g., a new topic for categorical generation), they require a full
retraining or finetuning.",contrasting
test_97,"Many great works have attempted to solve various subtasks like dialogue generation (Li et al., 2016), poetry generation (Yi et al., 2018) and story generation (Fan et al., 2018) and new techniques keep emerging (Bowman et al., 2016; Yu et al., 2017; Zhou et al., 2020).","due to the blackbox nature of neural networks, the recent proposed generic models suffer the problem of lacking interpretability and controllability.",contrasting
test_98,"Previous methods (Kingma et al., 2014; Hu et al., 2017) learn the joint conditional space by jointly considering all conditions.","once the model is trained, it is not possible to add a new condition without a full retraining.",contrasting
test_99,These methods collect user feedback after the model-predicting stage and treat user feedback as additional offline training data to improve the model.,our model leverages user interaction to increase prediction performance.,contrasting
test_100,"Pretrained autoregressive models such as GPT (Radford et al., 2018, 2019) are especially capable of generating fluent and coherent text that highly resembles human-written text",unidirectional attention brings two limitations.,contrasting
test_101,This can result in different computational complexity.,"since a typical Graphics Processing Unit (GPU) computes matrices in parallel, the actual difference in inference time is not that significant.",contrasting
test_102,"The Wikitext103 dataset is more similar to the pretraining datasets, containing long articles.","the One-Billion Words dataset contains only single sentences, roughly half of which contain less than 24 tokens.",contrasting
test_103,"Besides, the results show that there are few differences between relative positional embedding and absolute positional embedding for u-PMLM.","although BERT supports generation in arbitrary word order as well, the PPL for BERT is significantly worse than our proposed u-PMLM for both ""sequential"" and
""random"" settings, demonstrating the effectiveness of the proposed probabilistic masking scheme.",contrasting test_104,We show more cases of text generation in random order for u-PMLM-A and BERT in Appendix B.,"for PPL on One-Billion Words, the performances of u-PMLM and BERT are not satisfactory in comparison with GPT.",contrasting test_105,"For GPT, the input text can only be placed in the beginning and the generation process become uncontrollable, resulting in generating sentences with topic drift.",u-PMLM allows manually placing anchor sentences in the middle or end of the generated text to guide the topic of the generated text.,contrasting test_106,"Existing uses of pretrained MLMs in sequenceto-sequence models for automatic speech recognition (ASR) or neural machine translation (NMT) involve integrating their weights (Clinchant et al., 2019) or representations (Zhu et al., 2020) into the encoder and/or decoder during training.","we train a sequence model independently, then rescore its n-best outputs with an existing MLM.",contrasting test_107,"As the MLM gets stronger, the improvement from adding scores from GPT-2 goes to zero, suggesting that their roles overlap at the limit.","unlike recent work (Shin et al., 2019) but like previous work (Chen et al., 2017), we found that interpolating with a unidirectional LM remained optimal, though our models are trained on different datasets and may have an ensembling effect.",contrasting test_108,"In the IID setting, large pretrained Transformer models can attain near human-level performance on numerous tasks (Wang et al., 2019)","high IID accuracy does not necessarily translate to OOD robustness for image classifiers (Hendrycks and Dietterich, 2019), and pretrained Transformers may embody this same fragility.",contrasting test_109,"The recent work of Pruthi et al. 
(2019), which uses a typo-corrector to defend against adversarial typos, is such a reusable defense: it is trained once, then reused across different tasks.","we find that current typo-correctors do not perform well against even heuristic attacks, limiting their applicability.",contrasting test_110,"During training, each occurrence of ""at"" and ""abet"" is replaced with z.","since ""at"" is much more frequent, classifiers treat z similarly to ""at"" in order to achieve good overall performance.",contrasting test_111,This data consists of approximately 2 million instances constructed using the abstract and body structure of Wikipedia.,our approach to pre-training can generate data in unlimited quantity from any text source without assuming a particular document structure.,contrasting test_112,"Both improve with multiple iterations, though the improvement is much larger with CMLM.","even with 10 iterations, ENGINE is comparable to CMLM on DE-EN and outperforms it on RO-EN.",contrasting test_113,"As in EWISE, in EWISER logits are computed by a dot product between a matrix of hidden scores and output synset embeddings.","we do not train our own synset embeddings: rather, we employ off-the-shelf vectors.",contrasting test_114,"Consequently, the general-language and domain-specific contexts are maximally similar in these cases.","we assume that the contexts will vary more strongly for basic terms, and for non-terms we do not expect to find domain-specific sentences in the general-language corpus at all.",contrasting test_115,"Recent research on fairness has primarily focused on racial and gender biases within distributed word representations (Bolukbasi et al., 2016), coreference resolution (Rudinger et al., 2018), sentence encoders (May et al., 2019), and language models.","we posit that there exists a significant potential for linguistic bias that has yet to be investigated, which is the motivation for our work.",contrasting test_116,"In the context of question 
answering, SpanBERT appears to be slightly more robust than vanilla BERT when comparing overall performance on the two SQuAD datasets.",the difference becomes significant if we look only at the SQuAD 2.0-fine-tuned models' performance on answerable questions (7% difference).,contrasting test_117,"Existing adversarial training approaches have shown that retraining the model on the augmented training set improves robustness (Belinkov and Bisk, 2018; Eger et al., 2019; Jin et al., 2019).",this requires substantial compute resources.,contrasting test_118,"As depicted in Figure 1, we are interested in identifying clusters of subtly distinctive glyph shapes as these correspond to distinct metal stamps in the type-cases used by printers.","other sources of variation (inking, for example, as depicted in Figure 1) are likely to dominate conventional clustering methods.",contrasting test_119,"For example, Gulcehre et al. (2014) explored the influence of selecting different pooling norms on the performance of different image classification tasks.","the norms in their method are manually tuned, which are usually very time-consuming and may not be optimal.",contrasting test_120,"Multi-task learning (MTL) and transfer learning (TL) are techniques to overcome the issue of data scarcity when training state-of-the-art neural networks.",finding beneficial auxiliary datasets for MTL or TL is a time- and resource-consuming trial-and-error approach.,contrasting test_121,"This is reasonable as the ""O"" labels by far make up the majority of all labels in NER datasets.","this does not help to find similar dataset in other cases, because there is no meaningful ordering of the entropy values when comparing any of the POS samples with all the other samples.",contrasting test_122,"The incorporation of pseudo-tags is a standard technique widely used in the NLP community, (Rico et al., 2016; Melvin et al., 2017).","to the best of our knowledge, our approach is the first attempt to incorporate 
pseudo-tags as an identification marker of virtual models within a single model.",contrasting test_123,"As displayed in Table 2, SINGLEENS surpassed SINGLE by 0.44 and 0.14 on CoNLL-2003 and CoNLL-2000, respectively, for TFM:ELMO with the same parameter size.",nORMALEnS produced the best results in this setting.,contrasting test_124,Multi-modal neural machine translation (NMT) aims to translate source sentences into a target language paired with images.,"dominant multi-modal NMT models do not fully exploit fine-grained semantic correspondences between semantic units of different modalities, which have potential to refine multi-modal representation learning.",contrasting test_125,Non-autoregressive neural machine translation (NAT) predicts the entire target sequence simultaneously and significantly accelerates inference process.,"nAT discards the dependency information in a sentence, and thus inevitably suffers from the multi-modality problem: the target tokens may be provided by different possible translations, often causing token repetitions or missing.",contrasting test_126,"Therefore, the decoder has richer target-side information to detect and recover from such errors.",it is non-trivial to train the model to learn such behaviour while maintaining a reasonable speedup.,contrasting test_127,"The study of calibration on classification tasks has a long history, from statistical machine learning (Platt et al., 1999;Niculescu-Mizil and Caruana, 2005) to deep learning (Guo et al., 2017).",calibration on structured generation tasks such as neural machine translation (NMT) has not been well studied.,contrasting test_128,"Modern neural networks have been found to be miscalibrated on classification tasks in the direction of overestimation (Guo et al., 2017).",nMT models also suffer from under-estimation problems.,contrasting test_129,"For instance, if the spam candidate mutates at the critical positions, the label of the augmented text is likely to change.",normal 
candidates are less likely to be affected by this situation.,contrasting test_130,"Since manual coding is very laborious and prone to errors, many methods have been proposed for the automatic ICD coding task.","most of existing methods independently predict each code, ignoring two important characteristics: Code Hierarchy and Code Co-occurrence.",contrasting test_131,"(2) The effectiveness of hyperbolic representations: Our proposed model and the CNN+Attention can both correctly predict the code ""518.81"".",the CNN+Attention model gives contradictory predictions.,contrasting test_132,"For example, label ""movie"" should have the largest scalar projection onto a document about ""movie"".","even the learned label representation of ""music"" can be distinguished from ""movie"", it may also have a large scalar projection onto the document.",contrasting test_133,"(Zhao et al., 2019) improves the scalability of capsule networks for MLC.","they only use CNN to construct capsules, which capture local contextual information (Wang et al., 2016).",contrasting test_134,The typical MLC method SLEEC takes advantage of label correlations by embedding the label co-occurrence graph.,"sLEEC uses TF-IDF vectors to represent documents, thus word order is also ignored.",contrasting test_135,REGGNN is generally superior to both of them as it combines the local and global contextual information dynamically and takes label correlations into consideration using a regularized loss.,"the two capsule-based methods NLP-CAP and HYPERCAPS consistently outperform all the other methods owing to dynamic routing, which aggregates the fine-grained capsule features in a label-aware manner.",contrasting test_136,"In the hospitals, the doctors will make a comprehensive analysis mainly based on CC, HPI, PE, TR and the basic information, and make a diagnosis.",it is very hard for computers to automatically understand all the diverse sections and capture the key information before making an appropriate 
diagnosis.,contrasting test_137,Zhang et al. (2017) combines the variational auto-encoder and the variational recurrent neural network together to make diagnosis based on laboratory test data.,laboratory test data are not the only resources considered in this paper.,contrasting test_138,"Although ECNN also outputs a probability distribution over all diseases, the result is not interpretable due to its end-to-end nature.",the interpretability is very important in the CDS to explain how the diagnosis is generated by machines.,contrasting test_139,Remarkable success has been achieved when sufficient labeled training data is available.,"annotating sufficient data is labor-intensive, which establishes significant barriers for generalizing the stance classifier to the data with new targets.",contrasting test_140,The aspect-opinion pairs can provide a global profile about a product or service for consumers and opinion mining systems.,traditional methods can not directly output aspect-opinion pairs without given aspect terms or opinion terms.,contrasting test_141,"As the example sentence shown in Figure 1, (service, great), (prices, great) and (atmosphere, nice friendly) are three aspect-opinion pairs.","the co-extraction methods can only output the AT set {service, prices, atmosphere} and the OT set {great, nice friendly} jointly.",contrasting test_142,"Most of the previous AT and OT extraction methods formulate the task as a sequence tagging problem (Wang et al., 2016, 2017; Wang and Pan, 2018; Yu et al., 2019), specifically using a 5-class tag set: {BA (beginning of aspect), IA (inside of aspect), BP (beginning of opinion), IP (inside of opinion), O (others)}.","the sequence tagging methods suffer from a huge search space due to the compositionality of labels for extractive ABSA tasks, which has been proven in (Lee et al., 2017b;Hu et al., 2019).",contrasting test_143,"Motivated by the correlations between the two tasks, SRL has been utilized to help the ORL task by 
many previous studies (Ruppenhofer et al., 2008; Marasovic and Frank, 2018; Zhang et al., 2019b).","when opinion expressions and arguments compose complicated syntactic structures, it is difficult to correctly recognize the opinion arguments even with shallow semantic representation like SRL (Marasovic and Frank, 2018).",contrasting test_144,"Specifically, the pipeline way first trains the dependency parser and then fixes the parser components during training the ORL model.",the MTL way trains both the parser and the ORL model at the same time.,contrasting test_145,"As a baseline, Figure 2-(c) shows the most common MTL method, which shares a common encoder and uses multiple task-specific output layers, known as the hard-parameter-sharing MTL (Ruder, 2017; Marasovic and Frank, 2018).","this approach is not suitable for our scenario where the auxiliary parsing task has much more labeled data than the main ORL task, since the shared encoder is very likely to bias toward to parsing performance (Xia et al., 2019a).",contrasting test_146,"For example, Stab and Gurevych (2017) introduced Argument Annotated Essays (hereafter, Essay), and researchers attempted to predict tree arguments in the corpus (Eger et al., 2017;Potash et al., 2017;Kuribayashi et al., 2019).",these techniques lack the capability of dealing with more flexible arguments such as reason edges where a proposition can have several parents.,contrasting test_147,Potash et al. 
(2017) developed a pointer network architecture to predict edges.,we cannot simply utilize them for non-tree arguments because these models were built upon the assumption that an argument forms a tree structure.,contrasting test_148,"However, if one wants to apply WSD to some specific corpus, additional annotated training data might be required to meet the similar performance as ours, which defeats the purpose of a weakly supervised setting.","our contextualization, building upon (Devlin et al., 2019), is adaptive to the input corpus, without requiring any additional human annotations.",contrasting test_149,"In micro average, all the span predictions are aggregated together and then compared with the gold spans to get the precision and recall.",macro average is obtained by calculating the F1 score for each individual sentence and then take an average over all the sentences.,contrasting test_150,Multi-threading is employed since sentences are mutually independent.,we find that using more than 4 threads does not further improve the speed.,contrasting test_151,We can see that the performance gap is quite steady when we gradually reduce the number of training sentences.,the gap clearly becomes larger when each training sentence has less annotated dependencies.,contrasting test_152,"In BiLSTM-CRF, the CRF layer models the relation between neighbouring labels which leads to better results than simply predicting each label separately based on the BiLSTM outputs.","the CRF structure models the label sequence globally with the correlations between neighboring labels, which increases the difficulty in distilling the knowledge from the teacher models.",contrasting test_153,"In particular, discarding the conversion matrix in the ESD module also leads to the performance drop, which indicates the usefulness of capturing the label correspondence between the auxiliary module and our main MNER task.","as the main contribution of our MMI module, Image-Aware Word Representations (WR) 
demonstrates its indispensable role in the final performance due to the moderate performance drop after removal.",contrasting test_154,"These neural approaches have been shown to achieve the state-of-the-art performance on different benchmark datasets based on formal text (Yang et al., 2018).","when applying these approaches to social media text, most of them fail to achieve satisfactory results.",contrasting test_155,"Despite not being exposed to explicit syntactic supervision, neural language models (LMs), such as recurrent neural networks, are able to generate fluent and natural sentences, suggesting that they induce syntactic knowledge about the language to some extent.",it is still under debate whether such induced knowledge about grammar is robust enough to deal with syntactically challenging constructions such as long-distance subjectverb agreement.,contrasting test_156,"We expect that the main reason for lower performance for object RCs is due to frequency, and with our augmentation the accuracy will reach the same level as that for subject RCs.","for both all and animate cases, accuracies are below those for subject rCs (Figure 2).",contrasting test_157,"Moreover, Huang et al. 
(2019) improve TextGCN by introducing the message passing mechanism and reducing the memory consumption.",there are two major drawbacks in these graph-based methods.,contrasting test_158,"A concurrent work (Warstadt et al., 2019b) facilitates diagnosing language models by creating linguistic minimal pairs datasets for 67 isolate grammatical paradigms in English using linguist-crafted templates.",we do not rely heavily on artificial vocabulary and templates.,contrasting test_159,Such attention weights measure the relative importance of the token within a specific input sequence.,the attention score a_j captures the absolute importance of the token.,contrasting test_160,"After that, both of them directly use the word representation of two languages to retrieve the initial bilingual lexicons by computing the cosine distances of source and target word representations.",directly finding word alignments from scratch has some demerits.,contrasting test_161,Unsupervised neural machine translation (UNMT) has recently achieved remarkable results for several language pairs.,"it can only translate between a single language pair and cannot produce translation results for multiple language pairs at the same time.",contrasting test_162,"For example, Xu et al. (2019) and Sen et al. 
(2019) proposed a multilingual scheme that jointly trains multiple languages with multiple decoders.","the performance of their MUNMT is much worse than our re-implemented individual baselines (shown in Tables 2 and 3) and the scale of their study is modest (i.e., 4-5 languages).",contrasting test_163,"Moreover, the MUNMT model could alleviate the poor performance achieved with low-resource language pairs, such as En-Lt and En-Lv.",the performance of MUNMT is slightly worse than SM in some language pairs.,contrasting test_164,"The standard training algorithm in neural machine translation (NMT) suffers from exposure bias, and alternative algorithms have been proposed to mitigate this.",the practical impact of exposure bias is under debate.,contrasting test_165,"Previous work has sought to reduce exposure bias in training (Bengio et al., 2015; Ranzato et al., 2016; Shen et al., 2016; Wiseman and Rush, 2016; Zhang et al., 2019).","the relevance of error propagation is under debate: Wu et al. (2018) argue that its role is overstated in literature, and that linguistic features explain some of the accuracy drop at higher time steps.",contrasting test_166,Advanced pre-trained models for text representation have achieved state-of-the-art performance on various text classification tasks.,"the discrepancy between the semantic similarity of texts and labelling standards affects classifiers, i.e. 
leading to lower performance in cases where classifiers should assign different labels to semantically similar texts.",contrasting test_167,"In general, AAN achieved greater performance than AM.",their effectiveness turned out to be task-dependent.,contrasting test_168,"Previous studies aimed to improve multiple tasks; hence, they required multiple sets of annotated datasets.",our method does not require any extra labelled datasets and is easily applicable to various classification tasks.,contrasting test_169,"On top of it, Crosslingual training (or bilingual, denoted by ""Cross"") obtains marginal improvements for moderately lowresource languages.","the performance drops dramatically for two extremely low-resource languages, i.e., JA from 0.740 to 0.711 and EL from 0.702 to 0.684.",contrasting test_170,"Previous studies in multimodal sentiment analysis have used limited datasets, which only contain unified multimodal annotations.",the unified annotations do not always reflect the independent sentiment of single modalities and limit the model to capture the difference between modalities.,contrasting test_171,"An intuitive idea is that the greater the difference between inter-modal representations, the better the complementarity of intermodal fusion.","it is not easy for existing late-fusion models to learn the differences between different modalities, further limits the performance of fusion.",contrasting test_172,"The CHEAVD (Li et al., 2017) is also a Chinese multimodal dataset, but it only contains two modalities (vision and audio) and one unified annotation.",sIMs has three modalities and unimodal annotations except for multimodal annotations for each clip.,contrasting test_173,"However, these existing multimodal datasets only contain a unified multimodal annotation for each multimodal corpus.",sIMs contains both unimodal and multimodal annotations.,contrasting test_174,Hierarchical FM performs better than MLP+CNN by incorporating additional attributes that 
provide the visual semantic information and generating better feature representations via a hierarchical fusion framework.,these multimodal baselines pay more attention to the fusion of multimodal features.,contrasting test_175,"Thus, their performances are worse than MIARN, which focuses on textual context to model the contrast information between individual words and phrases.","due to the nature of short text, relying on textual information is often insufficient, especially in multimodal tweets where cross-modality context relies the most important role.",contrasting test_176,"As interpretability is important for understanding and debugging the translation process and particularly to further improve NMT models, many efforts have been devoted to explanation methods for NMT (Ding et al., 2017;Alvarez-Melis and Jaakkola, 2017;Li et al., 2019;Ding et al., 2019;.",little progress has been made on evaluation metric to study how good these explanation methods are and which method is better than others for NMT.,contrasting test_177,"In terms of i), Word Alignment Error Rate (AER) can be used as a metric to evaluate an explanation method by measuring agreement between human-annotated word alignment and that derived from the explanation method.",aER can not measure explanation methods on those target words that are not aligned to any source words according to human annotation.,contrasting test_178,"On one hand, the real data distribution of c t is unknowable, making it impossible to exactly define the expectation with respect to an unknown distribution.","the domain of a proxy model Q is not bounded, and it is difficult to minimize a model Q within an unbounded domain.",contrasting test_179,"This backpropagation through the generated data, combined with adversarial learning instabilities, has proven to be a compelling challenge when applying GANs for discrete data such as text.",it remains unknown if this is also an issue for feature matching networks since the 
effectiveness of GFMN for sequential discrete data has not yet been studied.",contrasting test_180,"An interesting comparison would be between SeqGFMN and GANs that use BERT as a pre-trained discriminator.","gANs fail to train when a very deep network is used as the discriminator. Moreover, SeqGFMN also outperforms GAN generators even when shallow word embeddings (GloVe / FastText) are used to perform feature matching.",contrasting test_181,"Extractive MRC requires a model to extract an answer span to a question from reference documents, such as the tasks in SQuAD (Rajpurkar et al., 2016) and CoQA (Reddy et al., 2019).","non-extractive MRC infers answers based on some evidence in reference documents, including Yes/No question answering (Clark et al., 2019), multiple-choice MRC (Lai et al., 2017; Khashabi et al., 2018; Sun et al., 2019), and open domain question answering (Dhingra et al., 2017b).",contrasting test_182,RL methods can indeed train a better extractor without evidence labels.,"they are much more complicated and unstable to train, and highly dependent on model pre-training.",contrasting test_183,"As the innermost ring shows, about 80% of the evidence predicted by BERT-HA (iter 0) was incorrect.",the proportion of wrong instances reduced to 60% after self-training (iter 3).,contrasting test_184,"Earlier studies have attempted to perform the MWP task via statistical machine learning methods (Kushman et al., 2014; Hosseini et al., 2014; Mitra and Baral, 2016; Roy and Roth, 2018) and semantic parsing approaches (Shi et al., 2015; Koncel-Kedziorski et al., 2015; Roy and Roth, 2015; Huang et al., 2017).",these methods are nonscalable as tremendous efforts are required to design suitable features and expression templates.,contrasting test_185,"To enrich the representation of a quantity, the relationships between the descriptive words associated with a quantity need to be modeled.","such relationships cannot be effectively modeled using recurrent models, which are 
commonly used in the existing MWP deep learning methods.",contrasting test_186,"While all of these methods are bag-of-words models, Liu et al. (2019a) recently proposed an architecture based on context2vec (Melamud et al., 2016).","in contrast to our work, they (i) do not incorporate surface-form information and (ii) do not directly access the hidden states of context2vec, but instead simply use its output distribution.",contrasting test_187,"For this reason, previous works used an in-house mapping between BabelNet versions to make them up to date.","in this process, several gold instances were lost making the datasets smaller than the original ones.",contrasting test_188,"WordNet provides information about sense frequency that is either manually-annotated or derived from SemCor (Miller et al., 1993), i.e., a corpus where words are manually tagged with WordNet meanings.","neither WordNet nor SemCor have been updated in the past 10 years, thus making their information about sense frequency outdated.",contrasting test_189,The method of applying REINFORCE to the discriminative parser is straightforward because sampling trees from the discriminative parser is easy.,"that is not the case for the generative model from which we have to sample both trees and sentences at the same time.",contrasting test_190,"In this task, models are expected to make predictions with the semantic information rather than with the demographic group identity information (e.g., ""gay"", ""black"") contained in the sentences.",recent research points out that there widely exist some unintended biases in text classification datasets.,contrasting test_191,"In other words, a non-discrimination model should perform similarly across sentences containing different demographic groups.","""perform similarly"" is indeed hard to define.",contrasting test_192,"Similar to results on Toxicity Comments, we find that both Weight and Supplement perform significantly better than Baseline in terms of IPTTS AUC and 
FPED, and the results of Weight and Supplement are comparable.","we notice that Weight and Supplement improve FNED slightly, while the differences are not statistically significant at confidence level 0.05.",contrasting test_193,"Current approaches define interpretation in a rather ad-hoc manner, motivated by practical usecases and applications.","this view often fails to distinguish between distinct aspects of the interpretation's quality, such as readability, plausibility and faithfulness (Herman, 2017).",contrasting test_194,"For example, Serrano and Smith (2019) and Jain and Wallace (2019) show that high attention weights need not necessarily correspond to a higher impact on the model's predictions and hence they do not provide a faithful explanation for the model's predictions.",wiegreffe and Pinter (2019) argues that there is still a possibility that attention distributions may provide a plausible explanation for the predictions.,contrasting test_195,Our main goal is to show that our proposed models provide more faithful and plausible explanations for their predictions.,before we go there we need to show that the predictive performance of our models is comparable to that of a vanilla LSTM model and significantly better than non-contextual models.,contrasting test_196,We observe that randomly permuting the attention weights in the Diversity and Orthogonal LSTM model results in significantly different outputs.,there is little change in the vanilla LSTM model's output for several datasets suggesting that the attention weights are not so meaningful.,contrasting test_197,"Several other works (Shao et al., 2019; Martins and Astudillo, 2016; Malaviya et al., 2018; Niculae and Blondel, 2017; Maruf et al., 2019; Peters et al., 2018) focus on improving the interpretability of attention distributions by inducing sparsity.",the extent to which sparse attention distributions actually offer faithful and plausible explanations haven't been studied in detail.,contrasting 
test_198,"The plotted eight words are gathered together, and it can be seen that hidden representations of the same word gather in the same place regardless of correctness.","fine-tuned BERT produces a vector space that demonstrates correct and incorrect words on different sides, showing that hidden representations take grammatical errors into account when fine-tuned on GEC corpora.",contrasting test_199,"For instance, DKN (Wang et al., 2018) learns knowledge-aware news representation via multi-channel CNN and gets a representation of a user by aggregating her clicked news history with different weights.","these methods (Wu et al., 2019b; Zhu et al., 2019; An et al., 2019) usually focus on news contents, and seldom consider the collaborative signal in the form of high-order connectivity underlying the user-news interactions.",contrasting test_200,"Wang et al. (2019) explored the GNN to capture high-order connectivity information in user-item graph by propagating embeddings on it, which achieves better performance on recommendation.","existing news recommendation methods focus on, and rely heavily on news contents.",contrasting test_201,"Initially, we set z u,k = s u,k .","after obtaining the latent variables {r d,k }, we can find an estimate of z u,k by aggregating information from the clicked news, which is computed as Eq.",contrasting test_202,Most existing methods usually learn the representations of users and news from news contents for recommendation.,they seldom consider highorder connectivity underlying the user-news interactions.,contrasting test_203,"It supposes that comparing to less important roles, the roles with bigger impact are expected to appear at more places and are more evenly distributed over the story.","this assumption ignores actions of roles (denoted as behavioral semantic information), which may be a key factor that estimates their impacts in legalcontext scenarios.",contrasting test_204,Position or frequency information does not 
effectively reflect the status of a role in such samples.,"our method captures this information by the cooperation mode feature between Yin and Zhao, with the help of verb ""instructed"".",contrasting test_205,"As we can see from Figure 2(a), the plain nets suffer from the degradation problem, which is not caused by overfitting, as they exhibit lower training BLEU.",the 72-layer MSC exhibits higher training BLEU than the 36-layer counterpart and is generalizable to the validation data.,contrasting test_206,"Figure 4(A) shows, as first identified by Kozlowski et al. (2019), that much of this is due to the variance of the survey data along that dimension; the correlation between variance and the coefficients in Figure 3 is 0.91.","as discussed above, Kozlowski et al. (2019) study more general concepts on more general dimensions, and note that they have no easy way to connect their observations to any critical social processes.",contrasting test_207,"On the other hand, from the ""bias"" perspective, this suggests that a vast array of social biases are encoded in embeddings.","we also find that some beliefs-specifically, extreme beliefs on salient dimensions -are easier to measure than others.",contrasting test_208,"Attention mechanisms learn to assign soft weights to (usually contextualized) token representations, and so one can extract highly weighted tokens as rationales.","attention weights do not in general provide faithful explanations for predictions (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019; Zhong et al., 2019; Pruthi et al., 2020; Brunner et al., 2020; Moradi et al., 2019; Vashishth et al., 2019).",contrasting test_209,"Original rationale annotations were not necessarily comprehensive; we thus collected comprehensive rationales on the final two folds of the original dataset (Pang and Lee, 2004).","to most other datasets, the rationale annotations here are span level as opposed to sentence level.",contrasting test_210,"In 
general, the rationales we have for tasks are sufficient to make judgments, but not necessarily comprehensive.",for some datasets we have explicitly collected comprehensive rationales for at least a subset of the test set.,contrasting test_211,"Since DeFormer retains much of the original structure, we can initialize this model with the pre-trained weights of the original Transformer and fine-tune directly on downstream tasks.",deFormer loses some information in the representations of the lower layers.,contrasting test_212,The upper layers can learn to compensate for this during finetuning.,we can go further and use the original model behavior as an additional source of supervision.,contrasting test_213,This is an orthogonal approach that can be combined with our decomposition idea.,"for the paired-input tasks we consider, pruning heads only provides limited speedup.",contrasting test_214,This is because the candidate justifications are coming from a relatively small number of paragraphs in MultiRC; thus even shorter queries (= 2 words) can retrieve relevant justifications.,"the number of candidate justifications in QASC is much higher, which requires longer queries for disambiguation (>= 4 words).",contrasting test_215,Access to such data can greatly facilitate investigation of phonetic typology at a large scale and across many languages.,"it is nontrivial and computationally intensive to obtain such alignments for hundreds of languages, many of which have few to no resources presently available.",contrasting test_216,"In addition to its coverage, the CMU Wilderness corpus is unique in two additional aspects: cleanly recorded, read speech exists for all languages in the corpus, and the same content (modulo translation) exists across all languages.",this massively multilingual speech corpus is challenging to work with directly.,contrasting test_217,"Since our greedy opportunistic decoding doesn't change the final output, there is no difference in BLEU compared 
with normal decoding, but the latency is reduced.","by applying beam search, we can achieve 3.1 BLEU improvement and 2.4 latency reduction on wait-7 policy.",contrasting test_218,"Thanks to the wealth of high-quality annotated images available in popular repositories such as ImageNet, multimodal language-vision research is in full bloom.","events, feelings and many other kinds of concepts which can be visually grounded are not well represented in current datasets.",contrasting test_219,"On one hand, the inclusion of NC concepts would be an important step towards wide-coverage image semantic understanding.","it also goes in the same direction as recent multimodal language-vision approaches, e.g., mono- and cross-lingual Visual Sense Disambiguation (Barnard and Johnson, 2005; Loeff et al., 2006; Saenko and Darrell, 2008; Gella et al., 2016, 2019).",contrasting test_220,"Our experiments show that both systems are reliable on our task, achieving precision and F1 scores that are over 70% on all the splits (see Table 2).",the F-VLP model proves to be the most stable for the task.,contrasting test_221,This accordance to some extent verifies that the neurons found through influence paths are functionally important.,"the t-values shown in Table 1 show that both neuron 125 and 337 are influential regardless of the subject number, whereas Lakretz et al. 
assign a subject number for each of these two neurons due to their disparate effect in lowering accuracy in ablation experiment.",contrasting test_222,"A slightly worse NA task performance (Lakretz et al., 2019) in cases of attractors (SP, PS) indicates that they interfere with prediction of the correct verb.","we also observe that helpful nouns (SS, PP) contribute positively to the correct verb number (although they should not from a grammar perspective).",contrasting test_223,"In particular, in the largest setting with N = 1M, the BERT-24 embeddings distilled from the best-performing layer for each dataset drastically outperform both Word2Vec and GloVe.",this can be seen as an unfair comparison given that we are selecting specific layers for specific datasets.,contrasting test_224,"As such, the validity of treating the resulting static embeddings as reliable proxies for the original contextualized model still remains open.","human language processing has often been conjectured to have both context-dependent and context-independent properties (Barsalou, 1982; Rubio-Fernandez, 2008; Depraetere, 2014, 2019).",contrasting test_225,"We find that each of the tested word features can be encoded in contextual embeddings for other words of the sentence, often with perfect or nearperfect recoverability.",we see substantial variation across encoders in how robustly each information type is distributed to which tokens.,contrasting test_226,CheckList provides a framework for such techniques to systematically evaluate these alongside a variety of other capabilities.,"checkList cannot be directly used for non-behavioral issues such as data versioning problems (Amershi et al., 2019), labeling errors, annotator biases (Geva et al., 2019), worst-case security issues (Wallace et al., 2019), or lack of interpretability (Ribeiro et al., 2016).",contrasting test_227,These experimental results show the critical role of triggers in dialogue-based relation extraction.,"trigger 
identification is perhaps as difficult as relation extraction, and it is labor-intensive to annotate large-scale datasets with triggers.",contrasting test_228,RST Graph is constructed from RST parse trees over EDUs of the document.,coreference Graph connects entities and their coreference clusters/mentions across the document.,contrasting test_229,"As observed in Louis et al. (2010), the RST tree structure already serves as a strong indicator for content selection.",the agreement between rhetorical relations tends to be lower and more ambiguous.,contrasting test_230,BERT is originally trained to encode a single sentence or sentence pair.,"a news article typically contains more than 500 words, hence we need to make some adaptation to apply BERT for document encoding.",contrasting test_231,"Because of the similarity to our task, we use a BERT-based neural network as the architecture for the coverage model.",the coverage task differs from MLM in two ways.,contrasting test_232,"Just as with the Summarizer, by using a standardized architecture and model size, we can make use of pretrained models.","it is important for Fluency to fine tune the language model on the target domain, so that the Summarizer is rewarded for generating text similar to target content.",contrasting test_233,"EMONET was conceived as a multiclass classification task for Plutchik-8 emotions (Abdul-Mageed and Ungar, 2017).","we introduce binary classification tasks, one for each Plutchik-8 emotion.",contrasting test_234,"In our work, we raise similar concerns but through a different angle by highlighting issues with the evaluation procedure used by several recent methods. Chandrahas et al. (2018) analyze the geometry of KG embeddings and its correlation with task performance while Nayyeri et al. 
(2019) examine the effect of different loss functions on performance.",their analysis is restricted to non-neural approaches.,contrasting test_235,Several recently proposed methods report high performance gains on a particular dataset.,their performance on another dataset is not consistently improved.,contrasting test_236,"Recently many reading comprehension datasets requiring complex and compositional reasoning over text have been introduced, including HotpotQA (Yang et al., 2018), DROP (Dua et al., 2019), Quoref , and ROPES (Lin et al., 2019).","models trained on these datasets (Hu et al., 2019;Andor et al., 2019) only have the final answer as supervision, leaving the model guessing at the correct latent reasoning.",contrasting test_237,Recent proposed approaches have made promising progress in dialogue state tracking (DST).,"in multi-domain scenarios, ellipsis and reference are frequently adopted by users to express values that have been mentioned by slots from other domains.",contrasting test_238,Open vocabulary models show the promising performance in multidomain DST.,ellipsis and reference phenomena among multi-domain slots are still less explored in existing literature.,contrasting test_239,"Some proposed solutions rely on leveraging knowledge distillation in the pre-training step, e.g., (Sanh et al., 2019), or used parameter reduction techniques (Lan et al., 2019) to reduce inference cost.",the effectiveness of these approaches varies depending on the target task they have been applied to.,contrasting test_240,"This may be seen as a generalization of the ST approach, where the student needs to learn a simpler task than the teacher.","our approach is significantly different from the traditional ST setting, which our preliminary investigation showed to be not very effective.",contrasting test_241,"These two dialogue-specific LM approaches, ULM and UOP, give very marginal improvement over the baseline models, that is rather surprising.","they show good 
improvement when combined with UID, implying that pre-training language models may not be enough to enhance the performance by itself but can be effective when it is coupled with an appropriate fine-tuning approach.",contrasting test_242,"Recently, pretrained language representation models (Kocijan et al., 2019;Radford et al., 2019;Liu et al., 2019) have demonstrated significant improvements in both unsupervised and supervised settings.","as these approaches treat the concept 'commonsense knowledge' as a black box, we are not clear about why they can do better (e.g., can these models understand commonsense or they just capture the statistical bias of the dataset) and do not know how to further improve them.",contrasting test_243,"For evaluation purposes, we may have labeled documents in the target language.",they are only used during the test period.,contrasting test_244,"For the last layer before softmax, even though XLM-FT also generates reasonable representations to separate positive and negative reviews, the data points are scattered randomly.",our model's output in the lower right panel of Figure 3 shows two more obvious clusters with corresponding labels that can be easily separated.,contrasting test_245,"This rating method, also known as Likert scale or Mean Opinion Score, is known to have two major drawbacks (Ye and Doermann, 2013): (1) Absolute rating is often treated as if it produces data on an interval scale.","assessors rarely perceive labels as equidistant, thus producing only ordinal data.",contrasting test_246,"Overall, the Bradley-Terry model appears to be a promising candidate for our purposes: its robustness and statistical properties have been studied in great detail (Hunter, 2004), and it can be efficiently computed (Chen et al., 2013).","an alternative offline sampling method has to be formulated, which we introduce in the following section.",contrasting test_247,"This can be generalized to higher step sizes s: for instance, if s = 2, all 
items that are separated by two positions around the ring are compared.","this strategy suffers from the major drawback that for some step sizes, the resulting graph has multiple unconnected components, thus violating the restriction that the comparison matrix must form a strongly connected graph.",contrasting test_248,Using a higher temperature yields a softer attention distribution.,a sharper attention distribution might be more suitable for NER because only a few tokens in the sentence are named entities.,contrasting test_249," In this work, we follow Chen et al. (2019) and use exactly the same functions.","as shown in 7 (c), understanding this statement requires the function of difference time, which is not covered by the current set.",contrasting test_250,"Word-level attacking, which can be regarded as a combinatorial optimization problem, is a well-studied class of textual attack methods.","existing word-level attack models are far from perfect, largely because unsuitable search space reduction methods and inefficient optimization algorithms are employed.",contrasting test_251,"Nonetheless, its attack validity rates against BiLSTM and BERT on SST-2 dramatically fall to 59.5% and 56.5%.","ours are 70.5% and 72.0%, and their differences are significant according to the results of significance tests in Appendix D. 
In this section, we conduct detailed decomposition analyses of different word substitution methods (search space reduction methods) and different search algorithms, aiming to further demonstrate the advantages of our sememe-based word substitution method and PSO-based search algorithm.",contrasting test_252,Mixed counting models that use the negative binomial distribution as the prior can well model over-dispersed and hierarchically dependent random variables; thus they have attracted much attention in mining dispersed document topics.,the existing parameter inference method like Monte Carlo sampling is quite time-consuming.,contrasting test_253,"On the one hand, NVI-based models are fast and easy to estimate but hard to interpret.",document modeling via mixed counting models is easy to interpret but difficult to infer.,contrasting test_254,"Such a framework learns the distribution of input data well, enabling it to combine with the traditional probability graphical models (e.g., LDA) and infer model parameters quickly (Srivastava and Sutton, 2017).",how to effectively integrate the distributed dependencies in mixed counting models into the framework of variational inference is still quite a challenging problem.,contrasting test_255,"From these results, we can observe that the proportion of topics obtained by NB-NTM is close to the topic-word number distribution.",gNB-NTM obtains more dispersed proportions of topics than NB-NTM.,contrasting test_256,"Everything was a point in a vector space, and everything about the nature of language could be learned from data.",most computational linguists had linguistic theories and the poverty-of-thestimulus argument.,contrasting test_257,And deep learning allows many aspects of these structured representations to be learned from data.,successful deep learning architectures for natural language currently still have many handcoded aspects.,contrasting test_258,"(Alex et al., 2007) propose several different modeling techniques 
(layering and cascading) to combine multiple CRFs for nested NER.",their approach cannot handle nested entities of the same entity type.,contrasting test_259,"Here, the two central entities (both indicate the city Liverpool) have similar sizes of neighborhoods and three common neighbors.","the three common neighbors (indicate United Kingdom, England and Labour Party (UK), respectively) are not discriminative enough.",contrasting test_260,"Most similar to our work are (Chen et al., 2019) and (Ghosh et al., 2019), as both studies are considered using the concept of question answering to address NLVL.","both studies do not explain the similarity and differences between NLVL and traditional span-based QA, and they do not adopt the standard span-based QA framework.",contrasting test_261,"By treating input video as text passage, the above frameworks are all applicable to NLVL in principle.",these frameworks are not designed to consider the differences between video and text passage.,contrasting test_262,"Figure 9 shows that VSLBase tends to predict longer moments, e.g., more samples with length error larger than 4 seconds in Charades-STA or 30 seconds in Activ-ityNet.","constrained by QGH, VSLNet tends to predict shorter moments, e.g., more samples with length error smaller that -4 seconds in Charades-STA or -20 seconds in ActivityNet Caption.",contrasting test_263,"A very recent work makes use of attention over spans instead of syntactic distance to inject inductive bias to language models (Peng et al., 2019).",the time complexity of injecting supervision is much higher than distancebased approach (O(n 2 ) VS O(n) ).,contrasting test_264,"For example, all punctuation symbols are removed, all characters are lower-cased, the vocabulary size is truncated at 10,000 and all sentences are concatenated.","this version of PTB discards the parse tree structures, which makes it unsuitable for comparing sequential language models with those utilizing tree structures.",contrasting 
test_265,"Because of the sequential and parallel nature of our model, it can directly inherit and benefit from this set of tricks.",it is non-trivial to use them for RNNG and URNNG.,contrasting test_266,"In particular, Kim et al. (2019a) report that unsupervised URNNG achieves 45.4 WSJ F1 in a similar setting, while another URNNG that finetunes a supervised RNNG model gives a much better F1 of 72.8, leading to a 27.4 F1 improvement.",the F1 of our structure prediction trees is 61.3 in unbiased algorithm.,contrasting test_267,But this strategy does not consider that it is properly difficult to acquire a dictionary with high quality for a brand new domain.,we develop a simple and efficient strategy to perform domain-specific words mining without any predefined dictionaries.,contrasting test_268,"In some languages, predicting declension class is argued to be easier if we know the noun's phonological form (Aronoff, 1992;Dressler and Thornton, 1996) or lexical semantics (Carstairs-McCarthy, 1994;Corbett and Fraser, 2000).","semantic and phonological clues are, at best, only very imperfect hints as to class (Wurzel, 1989;Harris, 1992;Aronoff, 1992;Halle and Marantz, 1994;Corbett and Fraser, 2000;Aronoff, 2007).",contrasting test_269,"Until now, we have assumed a one-to-one mapping between paradigm slots and surface form changes.",different EDIT TREES may indeed represent the same inflection.,contrasting test_270,"Our system, thus, needs to learn to apply the correct transformation for a combination of lemma and paradigm slot.","mapping lemmas and paradigm slots to inflected forms corresponds exactly to the morphological inflection task, which has been the subject of multiple shared tasks over the last years (Cotterell et al., 2018).",contrasting test_271,"Systems for supervised or semi-supervised paradigm completion are commonly being evaluated using word-level accuracy (Dreyer and Eisner, 2011; Cotterell et al., 2017).","this is not possible for our task because our 
system cannot access the gold data paradigm slot descriptions and, thus, does not necessarily produce one word for each ground-truth inflected form.",contrasting test_272,"On the one hand, applying more than one iteration of additional lemma retrieval impacts the results only slightly, as those lemmas are assigned very small weights.","we see performance differences > 2% between PCS-III-C and PCS-III-H for DEU, MLT, and SWE.",contrasting test_273,All these approaches first find a set of semantically similar sentences.,finding isolated similar sentences are not enough to construct a dialog utterances' paraphrase.,contrasting test_274,"Under the guidance of the reasoning chain, we learn a neural QG model to make the result satisfy the logical correspondence with the answer.","the neural model is data-hungry, and the scale of training data mostly limits its performance.",contrasting test_275,"Recently, the character-word lattice structure has been proved to be effective for Chinese named entity recognition (NER) by incorporating the word information.","since the lattice structure is complex and dynamic, most existing lattice-based models are hard to fully utilize the parallel computation of GPUs and usually have a low inference-speed.",contrasting test_276,"For example, queries ""red nike running shoes"", ""running nike shoes, red"" and ""red running shoes nike"" all refer to the same general product, despite differing in structure.","item titles are structured, with brand, size, color, etc. 
all mentioned in a long sequence, which is also not how a conventional sentence is structured.",contrasting test_277,"In this case, Q gen is similar to Q in that the item is somewhat related to Q gen , and there's a chance that I may be matched to Q gen due to keyword stuffing by sellers, or poor semantic matching.","another mismatched query Q gen = pizza cutter is not a good candidate to generate, since it's highly unlikely that a reasonable search engine will show shoes for a query about pizza cutters.",contrasting test_278,"Wu et al. (2019) applied dynamic convolutions using shared softmax-normalized filters of depth-wise on GLU-regulated inputs within a fixed reception field rather than global contexts, challenging the common self-attention-dominated intuition.","all of the models, as mentioned earlier, adopt stacked CNNs rather than self-attention networks (SAN) to attend to the global contexts.",contrasting test_279,"It can be also quantified by other manners, e.g. estimating the data likelihood with Monte Carlo approximation (Der Kiureghian and Ditlevsen, 2009) or validating the translation distribution using a well-trained NMT model (Zhang et al., 2018).","to these time-consuming techniques, LM marginally increases the computational cost and easy to be applied, conforming to the original motivation of CL.",contrasting test_280,"Since the speaker information is indispensable for coreference resolution, previous methods (Wiseman et al., 2016;Lee et al., 2017;Joshi et al., 2019a) usually convert the speaker information into binary features indicating whether two mentions are from the same speaker.",we use a straightforward strategy that directly concatenates the speaker's name with the corresponding utterance.,contrasting test_281,"Comparing with existing models (Lee et al., 2017;Joshi et al., 2019b), the proposed question answering formalization has the flexibility of retrieving mentions left out at the mention proposal stage.","since we still have the mention 
proposal model, we need to know in which situation missed mentions could be retrieved and in which situation they cannot.",contrasting test_282,The one-one target-source alignment 2(a) is the ideal condition of the projection.,"there could be many-to-one cases for the given words, leading to semantic role conflicts at the target language words.",contrasting test_283,"Metrics which measure the word-level overlap like BLEU (Papineni et al., 2002) have been widely used for dialogue evaluation.","these metrics do not fit into our setting well as we would like to diversify the response generation with an external corpus, the generations will inevitably differ greatly from the ground-truth references in the original conversational corpus.",contrasting test_284,"The second class seeks to bring in extra information into existing corpus like structured knowledge (Zhao et al., 2018;Ghazvininejad et al., 2018;Dinan et al., 2019), personal information (Li et al., 2016b;Zhang et al., 2018a) or emotions (Shen et al., 2017b;Zhou et al., 2018).",corpus with such annotations can be extremely costly to obtain and is usually limited to a specific domain with small data size.,contrasting test_285,"The user requests to book one ticket in the second example, yet both HDSA and Human Response ask about the number once again.",our model directly answers the questions with correct information.,contrasting test_286,"One limitation of ReGAT (Li et al., 2019) lies in the fact that it solely consider the relations between objects in an image while neglect the importance of text information.",our DC-GCN simultaneously capture visual relations in an image and textual relations in a question.,contrasting test_287,"Figure 6a shows that in terms of validation perplexity, MDR and FB perform very similarly across target rates.",figure 6b shows that at the end of training the difference between the target rate and the validation rate is smaller for MDR.,contrasting test_288,Previous conversational QA 
datasets provide the relevant document or passage that contain the answer of a query.,"in many real world scenarios such as FAQs, the answers need to be searched over the whole document collection.",contrasting test_289,"For instance, all the span-based QA datasets, except CQ (Bao et al., 2016), contain more than 100k samples.","the data size of most existing MCQA datasets is far less than 100k (see Table 1), and the smallest one only contains 660 samples.",contrasting test_290,"On the one hand, technical proposals as pre-trained embeddings, finetuning, and end-to-end modeling, have advanced NLP greatly.","neural advances often overlook MRL complexities, and disregard strategies that were proven useful for MRLs in the past.",contrasting test_291,"Document-level information extraction requires a global understanding of the full document to annotate entities, their relations, and their saliency.",annotating a scientific article is time-consuming and requires expert annotators.,contrasting test_292,One common characteristic of most of the tasks is that the texts are not restricted to some rigid formats when generating.,"we may confront some special text paradigms such as Lyrics (assume the music score is given), Sonnet, SongCi (classical Chinese poetry of the Song dynasty), etc.",contrasting test_293,"Our model can still generate high quality results on the aspects of format, rhyme as well as integrity.","for corpus Sonnet, even though the model can generate 14 lines text, the quality is not as good as SongCi due to the insufficient training-set (only 100 samples).",contrasting test_294,Our relevance framework is partially inspired by the local components matching which we apply here to model the relevance of the components of the model's inputs.,our work differs in several significant ways.,contrasting test_295,VisualBERT and CMR have a similar cross-modality alignment approach.,visualBERT only uses the Transformer representations while CMR uses the relevance 
representations.,contrasting test_296,"As we increase the number of layers in the visual Transformer and the cross-modality Transformer, it tends to improve accuracy.",the performance becomes stable when there are more than five layers.,contrasting test_297,"An agent needs to perform a functional communication task in a natural language (in this work, English).","examples of linguistic communication about this functional task are not available -the only natural language data that can be used consist of examples of generic natural language, which are not grounded in the functional task.",contrasting test_298,"Single-headed cross attention speeds up decoding: Despite removing learned self-attention from both the encoder and decoder, we did not observe huge efficiency or speed gains.",reducing the source attention to just a single head results in more significant improvements.,contrasting test_299,We do find that the largest improvement in WinoMT accuracy consistently corresponds to the model predicting male and female entities in the closest ratio (see Appendix A).,"the best ratios for models adapted to these datasets are 2:1 or higher, and the accuracy improvement is small.",contrasting test_300,Ribeiro et al. 
(2018) test for comprehension of minimally modified sentences in an adversarial setup while trying to keep the overall semantics the same.,we investigate large changes of meaning (negation) and context (mispriming).,contrasting test_301,"Clustering of such short text streams has thus gained increasing attention in recent years due to many real-world applications like event tracking, hot topic detection, and news recommendation (Hadifar et al., 2019).","due to the unique properties of short text streams such as infinite length, evolving patterns and sparse data representation, short text stream clustering is still a big challenge (Aggarwal et al., 2003;Mahdiraji, 2009).",contrasting test_302,"The similarity-based text clustering approaches usually follow vector space model (VSM) to represent the cluster feature space (Din and Shao, 2020).",a topic needs to be represented as the subspace of global feature space.,contrasting test_303,"Human judges show surprisingly inferior performance on user profiling tasks, grounding their judgement in topical stereotypes (Carpenter et al., 2017).","albeit more accurate thanks to capturing stylistic variation elements, statistical models are prone to stereotype propagation as well (Costa-jussa et al., 2019; Koolen and van Cranenburgh, 2017).",contrasting test_304,"For the foreseeable future, legal decision-making will be the province of lawyers, not AI.",one plausible use for MRC in a legal setting is as a screening tool for helping non-lawyers determine whether a case has enough merit to bother bringing in a lawyer.,contrasting test_305,"Recent work has shown gains by improving the distribution of masked tokens , the order in which masked tokens are predicted (Yang et al., 2019), and the available context for replacing masked tokens (Dong et al., 2019).","these methods typically focus on particular types of end tasks (e.g. 
span prediction, generation, etc.), limiting their applicability.",contrasting test_306,"We aim, as much as possible, to control for differences unrelated to the pre-training objective.",we do make minor changes to the learning rate and usage of layer normalisation in order to improve performance (tuning these separately for each objective).,contrasting test_307,"Bidirectional encoders are crucial for SQuAD As noted in previous work (Devlin et al., 2019), just left-to-right decoder performs poorly on SQuAD, because future context is crucial in classification decisions.",bART achieves similar performance with only half the number of bidirectional layers.,contrasting test_308,"Unsurprisingly, model output is fluent and grammatical English.","outputs are also highly abstractive, with few copied phrases.",contrasting test_309,One of the issues in the original ON-LSTM is that the master gates and the model-based importance score for each word are only conditioned on the word itself and the left context encoded in the previous hidden state.,"in order to infer the importance for a word in the overall sentence effectively, it is crucial to have a view over the entire sentence (i.e., including the context words on the right).",contrasting test_310,"On the one hand, as GCN is directly dependent on the syntactic structures of the input sentences, it would not be able to learn effective representations for the sentences with new structures in the GCN-failure examples for RE.","as CEON-LSTM only exploits a relaxed general form of the tree structures (i.e., the importance scores of the words), it will be able to generalize better to the new structures in the GCN-failure examples where the general tree form is still helpful to induce effective representations for RE.",contrasting test_311,ELMo down-samples the outputs of its convolutional layers by max-pooling over the feature maps.,this operation is not ideal to adapt to new morphological patterns from other languages as the 
model tends to discard patterns from languages other than English.",contrasting test_312,The soft gazetteer features we propose instead take advantage of existing limited gazetteers and English knowledge bases using low-resource EL methods.,"to typical binary gazetteer features, the soft gazetteer feature values are continuous, lying between 0 and 1.",contrasting test_313,"In addition, CGExpan-NoCN outperforms most baseline models, meaning that the pre-trained LM itself is powerful to capture entity similarities.","it still cannot beat CGExpan-NoFilter model, which shows that we can properly guide the set expansion process by incorporating generated class names.",contrasting test_314,"HINT proposes a ranking loss between human-based importance scores (Das et al., 2016) and the gradient-based sensitivities.",sCR does not require exact saliency ranks.,contrasting test_315,"As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set.","if we were to compare against the improvements in VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then, the performance on VQAv2 drops continuously during the course of the training.",contrasting test_316,"In the original paper, Das et al. 
(2017) find that models which structurally encode dialog history, such as Memory Networks (Bordes et al., 2016) or Hierarchical Recurrent Encoders (Serban et al., 2017) improve performance.","""naive"" history modelling (in this case an encoder with late fusion/concatenation of current question, image and history encodings) might actually hurt performance.",contrasting test_317,"In this paper, we show that competitive results on VisDial can indeed be achieved by replicating the top performing model for VQA (Yu et al., 2019b) -and effectively treating visual dialog as multiple rounds of question-answering, without taking history into account.","we also show that these results can be significantly improved by encoding dialog history, as well as by fine-tuning on a more meaningful retrieval metric.",contrasting test_318,"Other visual dialog tasks, such as GuessWhich? (Chattopadhyay et al., 2017) and GuessWhat?! (De Vries et al., 2017) take place in a goal-oriented setting, which according to Schlangen (2019), will lead to data containing more natural dialog phenomena.","there is very limited evidence that dialog history indeed matters for these tasks (Yang et al., 2019).",contrasting test_319,"The combined features approach models the relationships between the different features explicitly, but the large target spaces for morphologically rich languages further increase sparsity.","separate feature modeling guarantees smaller target spaces for the individual features, but the hard separation between the features prevents modeling any inter-feature dependencies.",contrasting test_320,"The results are lower, both for MSA and EGY.","the result for MSA is very close to the (Zalmout and Habash, 2017) baseline, which uses separate feature models (with the analyzer).",contrasting test_321,"Although increases exist across all domains, these are most prominent in domains like TC (+5.36) that have a low density of named entities and where in-domain models have access to limited 
amounts of data.","the in-domain performance is better than the pooled method of training, which shows consistent drops in performance on some domains (-8.69 on WB, -6.77 on BC, -1.98 on CoNLL), where information from other domains did not benefit the model.",contrasting test_322,"Our proposed model architecture takes 0.15 ms (33% increase) longer for inference than InDomain or PoolDomain models, which is a result of more model parameters.",our proposed architecture is still 0.19 ms faster than using the InDomain+DomainClassifier approach.,contrasting test_323,"State-of-the-art approaches for attribute value extraction (Zheng et al., 2018; Xu et al., 2019; Rezk et al., 2019) have employed deep learning to capture features of product attributes effectively for the extraction purpose.",they are all designed without considering the product categories and thus cannot effectively capture the diversity of categories across the product taxonomy.,contrasting test_324,The CT and CAT models learn to map source code and natural language tokens into a joint embedding space such that semantically similar code-natural language pairs are projected to vectors that are close to each other.,"these two representations interact only in the final step when the global similarity of the sequence embeddings is calculated, but not during the first step when each sequence is encoded into its corresponding embedding.",contrasting test_325,"Currently, the TClda are trained on student essays, while the TCpr only works on the source article.",TCattn uses both student essays and the source article for TC generation.,contrasting test_326,"Entity linking systems consider three sources of information: 1) similarity between mention strings and names for the KB entity; 2) comparison of the document context to information about the KB entity (e.g. 
entity description); 3) information contained in the KB, such as entity popularity or inter-entity relations",to the dense KBs in entity linking, concept linking uses sparse ontologies, which contain a unique identifier (CUI), title, and links to synonyms and related concepts, but rarely longform text.,contrasting test_327,We ran experiments that padded the names with synonyms or other forms of available text within the knowledge base.,we did not see consistent improvements.,contrasting test_328,"Depending on the application, a less accurate but faster linker might be a better choice (e.g. for all clinical notes at a medical institution).","a more complex linker, such as ours, may be a better option for specific subsets of notes that require better accuracy (e.g., the results of specific clinical studies).",contrasting test_329,"In natural language processing, Natural Language Inference (NLI)-a task whereby a system determines whether a pair of sentences instantiates in an entailment, a contradiction, or a neutral relation-has been useful for training and evaluating models on sentential reasoning.","linguists and philosophers now recognize that there are separate semantic and pragmatic modes of reasoning (Grice, 1975; Clark, 1996; Beaver, 1997; Horn and Ward, 2004; Potts, 2015), and it is not clear which of these modes, if either, NLI models learn.",contrasting test_330,"For example, the Independent model will produce identical scores for each output label, if it chooses to completely ignore the input explanations.",the model is still free to learn a different kind of bias which is an outcome of the fact that natural language explanations convey ideas through both content and form.,contrasting test_331,The results demonstrate a much weaker link between NILE-NS's predictions and associated explanations.,NILE behaves more expectedly.,contrasting test_332,"This led folk wisdom to suggest that modeling higher-order features in a neural parser would not bring additional 
advantages, and nearly all recent research on dependency parsing was restricted to first-order models (Dozat and Manning, 2016; Smith et al., 2018a). Kulmizev et al. (2019) further reinforced this belief comparing transition and graph-based decoders (but none of which higher order); Falenska and Kuhn (2019) suggested that higher-order features become redundant because the parsing models encode them implicitly.",there is some evidence that neural parsers still benefit from structure modeling.,contrasting test_333,"This approach, named adversarial training (AT), has been reported to be highly effective on image classification (Goodfellow et al., 2015), text classification (Miyato et al., 2017), as well as sequence labeling (Yasunaga et al., 2018).","AT is limited to a supervised scenario, which uses the labels to compute adversarial losses.",contrasting test_334,"To apply the conventional VAT on a model with CRF, one can calculate the KL divergence on the label distribution of each token between the original examples and adversarial examples.",it is sub-optimal because the transition probabilities are not taken into account.,contrasting test_335,"VAT achieved state-of-the-art performance for image classification tasks (Miyato et al., 2019), and proved to be more efficient than traditional semi-supervised approaches, such as entropy minimization (Grandvalet and Bengio, 2004) and self-training (Yarowsky, 1995), from a recent study (Oliver et al., 2018).","despite the successful applications on text classification (Miyato et al., 2017), VAT has not shown great benefits to semi-supervised sequence labeling tasks, due to its incompatibility with CRF.",contrasting test_336,"With multiple layers, SpellGCN can aggregate the information in more hops and therefore, achieve better performance.",the F1 score drops when the number of layers is larger than 3.,contrasting test_337,"The evident way to construct a corpus with NL questions and their corresponding OT queries would 
consist of two main parts: first, collect a set of NL questions, and then create the corresponding OT queries to these questions.",this approach is very time-consuming and has a major issue.,contrasting test_338,"On the other hand, LC-QuaD 2.0 contains an average of 2 hops (equivalent to two joins in relational databases) per query, which lies in the nature of graph database queries that are optimized for handling queries that range over multiple triple patterns.","LC-QuaD 2.0 lacks complexity when considering more complex components (e.g., Group By, Set-Operation, etc.).",contrasting test_339,Table 2 is a comparison of existing English MWP corpora.,"these existing corpora are either limited in terms of the diversity of the associated problem types (as well as lexicon usage patterns), or lacking information such as difficulty levels.",contrasting test_340,"Likewise, we could also enlarge the training-set by duplicating MWPs without affecting the CLD value against the test-set.",it would be also meaningless as no new information would be provided.,contrasting test_341,"Because this type of evaluation is typically task-specific, it can be conducted in multilingual settings.","training a range of task-specific multilingual models might require significant resources, namely, training time and computational power.",contrasting test_342,"Type-level probing tasks have the advantage of containing less bias (domain, annotator, and majority class); whereas token-level tests might be sensitive to the domain biases from the underlying full-text data.","token-level tests have the advantage of being more lexically diverse; whereas type-level tasks can be less diverse for some languages like Spanish, French, and English.",contrasting test_343,"Unlike previously introduced embedding models, ELMo provides contextualized embeddings, that is, the same words would have different representations when used in different contexts.","our probing tests are type-level (as opposed to 
token-level), thus we only use the representations generated independently per each token both for the intrinsic and extrinsic experiments.",contrasting test_344,"Another potential reason for the difference in ranking is the domain of the data underlying the respective data sets: For the majority of the languages, POS, DEP, and SRL data originates from the same treebanks and has gold (expert) annotations.","NER and XNLI data sets are generally compiled from a different, and often diverse set of resources.",contrasting test_345,The abstract syntax can be vastly different in domains ranging from mathematics to tourist phrasebooks.,"the linguistic mechanisms needed in the concrete syntax-morphology, agreement, word order-are largely the same in all areas of discourse.",contrasting test_346,"For this purpose, it is enough to express all desired content in one way: One does not need to cover all possible ways to express things.","in wide-coverage parsing, this is a serious limitation.",contrasting test_347,"This led to a series of extensions as described in Section 4.5, first meant to cover the missing English structures.","if the ultimate goal is to build an interlingual grammar, the structures designed for English are not necessarily adequate for other languages-in particular, they might not allow for compositional linearizations.",contrasting test_348,"There, every synset corresponds to one abstract function and then the function's linearization in each language produces all words in the language as variants.",a more detailed analysis shows that this is not ideal.,contrasting test_349,"Our focus in this paper has been on recognizing valid chains of reasoning, assuming a retrieval step that retrieves a reasonable pool of candidates to start with (Section 3.2).","the retrieval step itself is not perfect: For QASC, designed so that at least one valid chain always exists, the retrieved pool of 10 contains no valid chains for 24% of the questions (upper bound in Table 2), 
capping the overall system's performance.",contrasting test_350,"Multitask learning (Caruana, 1997;Collobert and Weston, 2008) seeks to learn a single model that can solve multiple tasks simultaneously, similar to our framework that seeks to learn a model that can solve many tasks.","in multitask learning each task is learned from examples, and the model is not able to generalize to unseen tasks.",contrasting test_351,Attribution of natural disasters/collective misfortune is a widely-studied political science problem.,"such studies typically rely on surveys, expert opinions, or external signals such as voting outcomes.",contrasting test_352,"For instance, the most-recent PEW survey (Pew) focused on India was conducted in 2018 on only 2,521 users.","our data set consists of comments from 43,859 users.",contrasting test_353,"We observe that, on the detection task, all the BERT based models perform similarly.","on the resolution task, the F1 score substantially improves as we keep adding sophistication to our model architecture.",contrasting test_354,"Third, a more comprehensive evaluation methodology would consider both the exact-match accuracy and the execution-match accuracy, because two logic forms can be semantically equivalent yet do not match precisely in their surface forms.","as shown in Table 1, most existing work is only evaluated with the exact-match accuracy.",contrasting test_355,"Consequently, the execution engines of domain-specific MRs need to be significantly customized for different domains, requiring plenty of manual efforts.",sQL is a domain-general MR for querying relational databases.,contrasting test_356,"There is a predicate tomorrow in all three domainspecific MRs, and this predicate can directly align to the description in the utterance.","one needs to explicitly express the concrete date values in the SQL query; this requirement can be a heavy burden for neural approaches, especially when the values will change over time.",contrasting 
test_357,"Unfortunately, this integral is intractable due to the complex relationship between X and Z.",related latent variable models like variational autoencoders (VAEs; Kingma and Welling (2013)) learn by optimizing a variational lower bound on the log marginal likelihood.,contrasting test_358,"For instance, we could feed both sentences into the semantic encoder and pool their representations.",in practice we find that alternating works well and also can be used to obtain sentence embeddings for text that is not part of a translation pair.,contrasting test_359,"This model is similar to Infersent (Conneau et al., 2017) in that it is trained on natural language inference data, SNLI (Bowman et al., 2015).","instead of using pretrained word embeddings, they fine-tune BERT in a way to induce sentence embeddings.",contrasting test_360,"Since BGT W/O LANGVARS also has significantly better performance on these tasks, most of this gain seems to be due to the prior having a regularizing effect.","BGT outperforms BGT W/O LANGVARS overall, and we hypothesize that the gap in performance between these two models is due to BGT being able to strip away the language-specific information in the representations with its language-specific variables, allowing for the semantics of the sentences to be more directly compared.",contrasting test_361,"Japanese is a very distant language to English both in its writing system and in its sentence structure (it is an SOV language, where English is an SVO language).","despite these differences, the semantic encoder strongly outperforms the English language-specific encoder, suggesting that the underlying meaning of the sentence is much better captured by the semantic encoder.",contrasting test_362,We can observe that the inherent data imbalance problem also exists in MAVEN.,"as MAVEN is large-scale, 41% and 82% event types have more than 500 and 100 instances respectively.",contrasting test_363,"Most recently, continuous improvements have been 
achieved by combining multiple kinds of information in KGs or using more sophisticated embedding models.",the performances of most approaches are still not satisfactory.,contrasting test_364,"To generate the PCG of two KGs, we can first pair all the entities from two KGs as nodes, and then use Equation 1 to generate edges between nodes.","KGs usually contain large number of entities, the PCG of two large-scale KGs will contain huge number of nodes.",contrasting test_365,"Recent studies on single-document summarization (SDS) benefit from the advances in neural sequence learning (Nallapati et al., 2016;See et al., 2017;Chen and Bansal, 2018;Narayan et al., 2018) as well as pretrained language models (Liu and Lapata, 2019;Lewis et al., 2019;Zhang et al., 2020) and make great progress.","in multi-document summarization (MDS) tasks, neural models are still facing challenges and often underperform classical statistical methods built upon handcrafted features (Kulesza and Taskar, 2012).",contrasting test_366,"One extension (Cho et al., 2019) of these studies uses capsule networks (Hinton et al., 2018) to improve redundancy measures.",its capsule networks are pre-trained on SDS and fixed as feature inputs of classical methods without end-to-end representation learning.,contrasting test_367,"Compared to hard cutoff, our soft attention favors top-ranked candidates of the sentence ranker (MMR).","it does not discard low-ranked ones, as the ranker is imperfect, and those sentences ranked low may also contribute to a high-quality summary.",contrasting test_368,"RL-MMR has a more salient and non-redundant summary, as it is end-to-end trained with advances in SDS for sentence representation learning while maintaining the benefits of classical MDS approaches.",MMR alone only considers lexical similarity; the redundancy measure in DPP-Caps-Comb is pre-trained on one SDS dataset with weak supervision and fixed during the training of DPP.,contrasting test_369,The authors considered 
several predefined probe architectures and picked one of them based on a manually defined criterion.,the variational code gives probe architecture as a byproduct of training and does not need human guidance.,contrasting test_370,"To this day, scientific publications still serve as a fundamental fixed-domain benchmark for neural KPE methods (Meng et al., 2017;Alzaidy et al., 2019;Sahrawat et al., 2019) due to the availability of ample data of this kind.","experiments have revealed that KPE methods trained directly on such corpora do not generalize well to other web-related genres or other types of documents (Chen et al., 2018;Xiong et al., 2019), where there may be far more heterogeneity in topics, content and structure, and there may be more variation in terms of where a key phrase may appear.",contrasting test_371,"Case #2 shows a similar situation where the model with visual features finds the proper keyphrases that are much larger in font size, while the text-only model selects nouns elsewhere.",case #3 demonstrates a typical kind of web page where visual features can be misleading: an indexing page.,contrasting test_372,"As a result, the relationships between entities are not captured.",since KB is naturally a graph structure (nodes are entities and edges are relations between entities).,contrasting test_373,"Moreover, structural knowledge such as dependency relationships has recently been investigated on some tasks (e.g., relation extraction) (Peng et al., 2017;Song et al., 2018) and shown to be effective in the model's generalizability.","such dependency relationships (essentially also graph structure) have not been explored in dialogue systems, again missing great potential for improvements.",contrasting test_374,"different predecessors in H) should have different impacts on the output hidden state h_t, and we expect our model to capture that.",the inputs may have different number of predecessors at different timesteps.,contrasting test_375,"We can observe 
that our model without the graph encoder has a 1.6% absolute value loss (over 25% in ratio) in BLEU score and a 1.1% absolute value loss (9.8% in ratio) in entity F1 on MultiWOZ 2.1, which suggests that the overall quality of the generated sentences are better improved by our graph encoder.",ours without knowledge graph means that we do not use the graph structure to store and retrieve the external knowledge data.,contrasting test_376,"These systems rely on offline (batch) training and have drawn recent criticism due to their inability to adapt to new contexts (Linzen, 2020).","humans acquire language from evolving environments, require a small memory footprint (McClelland et al., 1995), and can generalize their knowledge to newer tasks (Sprouse et al., 2013).",contrasting test_377,"However, these works are mainly designed for image classification tasks where the training data has ""clear"" task boundaries-i.e., training stream are partitioned into disjoint subsequences.","task boundaries in VisCOLL are unknown and ""smooth"" (i.e., with gradual transitions between tasks)-a setting that is closer to real-world situations.",contrasting test_378,"In particular, Li et al. 
(2020) study a closely related task of continual learning of sequence prediction for synthetic instruction following",their techniques for separating semantics and syntax is restricted to text-only case.,contrasting test_379,"Prior work uses only the article content and metadata including title, date, domain, and authors.",news articles often contain photos and captions as well.,contrasting test_380,"Despite the fact that leveraging metadata significantly improves the performance of Grover, it also appears that the accuracy does not vary much with the exclusion of different types of metadata.",we observe a surprising observation that leveraging all metadata causes the detection accuracy to decrease.,contrasting test_381,"Todd Frazier's sacrifice fly accounted for the first run before Jose Bautista drove in the next two with a line drive RBI single to right, and a bases-loaded single by Todd Frazier also scored a run.",deJong and Bader homered off Bobby Wahl to begin the Cardinals' comeback.,contrasting test_382,Each sub-problem is worthy of being standardized and continually studied given a well defined objective and data sets so that the performance could be fairly evaluated and the progress can be continually made.,"it is not easy in the current methodology, since each pipeline's strategies are closely bonded to own implementation.",contrasting test_383,"Finetuning a pretrained language model (Dai and Le, 2015;Howard and Ruder, 2018) often delivers competitive performance partly because pretraining leads to a better initialization across various downstream tasks than training from scratch (Hao et al., 2019).",finetuning on individual NLP tasks is not parameter-efficient.,contrasting test_384,Mallya et al. 
(2018) explicitly update weights in a task-specific classifier layer.,"we show that end-to-end learning of selective masks, consistently for both the pretrained language model and a randomly initialized classifier layer, achieves good performance.",contrasting test_385,"Large-scale training datasets lie at the core of the recent success of neural machine translation (NMT) models.",the complex patterns and potential noises in the large-scale data make training NMT models difficult.,contrasting test_386,"Neural machine translation (NMT) is a data-hungry approach, which requires a large amount of data to train a well-performing NMT model (Koehn and Knowles, 2017).",the complex patterns and potential noises in the large-scale data make training NMT models difficult.,contrasting test_387,"Another stream is to schedule the order of training examples according to their difficulty, e.g., curriculum learning which has been applied to the training of NMT models successfully (Kocmi and Bojar, 2017; Zhang et al., 2018; Platanios et al., 2019; Liu et al., 2020b).","we explore strategies to simplify the difficult (i.e., inactive) examples without changing the model architecture and model training strategy.",contrasting test_388,"For the latter, NMT models tend to prefer a more typical alternative to a relatively rare but correct one (e.g., French ""Il"" is often wrongly translated to the more common ""it"" than ""he"" ).","However, these seemingly trivial errors can erode translation to the extent that they can be easily distinguishable from human-translated texts (Läubli et al., 2018).",contrasting test_389,"Both of the discriminative losses essentially promote the probability of the positive (i.e., correct) sample.","the intuition behind using the additional loss over the standard loss is that the fine-tuning here focuses on improving the positive sample over the negative sample that the model has learnt to produce, rather than over the entire probability distribution over the full vocabulary.",contrasting test_390,"In the second example, we observe a biased anticipation case where the NMT system had to emit a wrong translation chien ('dog') before seeing the noun 'bird'.",the multimodal model successfully leveraged the visual context for anticipation and correctly handled the adjective-noun placement phenomenon.,contrasting test_391,"Our approach is most similar to Bjerva et al. (2019a), as they build a generative model from typological features and use language embeddings, extracted from factored language modelling at character-level, as a prior of the model to extend the language coverage.","our method primarily differs as it is mainly based in linear algebra, encodes information from both sources since the beginning, and can deal with a small number of shared entries (e.g. 
23 from LW) to compute robust representations.",contrasting test_392,"We work with 53 languages pre-processed by (Qi et al., 2018), from where we mapped the ISO 639-1 codes to the ISO 639-2 standard.","we need to manually correct the mapping of some codes to identify the correct language vector in the URIEL (Littell et al., 2017) library: • zh (zho, Chinese macro-language) mapped to cmn (Mandarin Chinese).",contrasting test_393,"In German, we reach the maximum 0.41 when the number of words in each topic equals 2, and the minimum when it equals 100.","we observe the most noticeable changes when we vary the number of topics in French (Ousidhoum et al., 2019) such that B_1 = 0.34 when |T| = 2 versus 0.21 when |T| = 7 and back to 0.37 when |T| = 100.",contrasting test_394,"On the other hand, we observe the most noticeable changes when we vary the number of topics in French (Ousidhoum et al., 2019) such that B_1 = 0.34 when |T| = 2 versus 0.21 when |T| = 7 and back to 0.37 when |T| = 100.","we remark overall cohesion despite the change in topic numbers especially in the case of Italian and Portuguese caused by the limited numbers of search keywords, that equal 5 and 7 respectively.",contrasting test_395,"Waseem and Hovy (2016), Founta et al. (2018) and Ousidhoum et al. 
(2019) report using different keywords and hashtags to collect tweets.","the scores shown in Table 4 indicate that the datasets might carry similar meanings, specifically because WUP relies on hypernymy rather than common vocabulary use.",contrasting test_396,"On the other hand, the copy model (row 3) significantly improves the BLEU scores by 36.2-37.6 points, by learning to re-use words in input texts.","it still suffers the small data size, and its outputs are worse than the original questions without any transformation (row 1).",contrasting test_397,"For system-level evaluation, metrics which can use the reference translations for quality estimation, such as BLEU, generally achieved consistently high correlation with human evaluation for all language pairs.",QE models (including our QE model and submitted systems for the QE as a Metric task) are not allowed to use the reference translations for quality estimation and tend to generate more unstable results: high correlation with human evaluation for some language pairs but very low or even negative Pearson correlation with human evaluation for some other language pairs.,contrasting test_398,"As a sequence-to-sequence generation task, neural machine translation (NMT) naturally contains intrinsic uncertainty, where a single sentence in one language has multiple valid counterparts in the other.",the dominant methods for NMT only observe one of them from the parallel corpora for the model training but have to deal with adequate variations under the same meaning at inference.,contrasting test_399,"Recently, there are increasing number of studies investigating the effects of quantifying uncertainties in different applications (Kendall and Gal, 2017;Xiao and Wang, 2018;Zhang et al., 2019b,a;Shen et al., 2019).",most work in NMT has focused on improving accuracy without much consideration for the intrinsic uncertainty of the translation task itself.,contrasting test_400,when h tends to 0 our controlled sampling method 
achieves lowest BLEU scores but highest edit distances.,"if we increase h gradually, it can be quickly simplified to greedy search.",contrasting test_401,"A sequence-to-sequence (seq2seq) learning with neural networks empirically shows to be an effective framework for grammatical error correction (GEC), which takes a sentence with errors as input and outputs the corrected one.",the performance of GEC models with the seq2seq framework heavily relies on the size and quality of the corpus on hand.,contrasting test_402,"The former applies text editing operations such as substitution, deletion, insertion and shuffle, to introduce noises into original sentences, and the latter trains a clean-to-noise model for error generation.","the noise-corrupted sentences are often poorly readable, which are quite different from those made by humans.",contrasting test_403,"Once a vulnerable position is determined, the token at that position is usually replaced with one of its synonyms.",generating adversarial examples through such synonym-based replacement is no longer applicable to the GEC task.,contrasting test_404,"Adversarial training by means of adding the adversarial examples into the training set can effectively improve the models' robustness.","some studies show that the models tend to overfit the noises, and the accuracy of the clean data will drop if the number of adversarial examples dominates the training set.",contrasting test_405,"Indeed, LDA implicitly assumes that Ψ = Unif(1, ..., K) deterministically-i.e., that every topic is assumed a priori to contain the same number of tokens.",the HDP model learns this distribution from the data by letting Ψ ∼ GEM(γ).,contrasting test_406,"Semisupervised methods that utilize such external corpora have been successful in English STS.","the need for external corpora is a major obstacle when applying STS, a fundamental technology, to low-resource languages.",contrasting test_407,"One particularly promising usage of BERT-based models for 
unsupervised STS is BERTScore, which was originally proposed as an automatic evaluation metric.",our preliminary experiments show that BERTScore performs poorly on unsupervised STS.,contrasting test_408,"From the figure, we find that overall the average accuracy raises when K increases from 2 to 8, which suggests the importance of disentangling components.","when K grows larger than 8, the performance starts to decline.",contrasting test_409,"As shown in Figure 6(b), except for the case of n = 1, the other settings have comparable performance.","it can be seen that when n = 4, the average accuracy on the last task is the highest, which indicates that the model has the strongest ability to avoid catastrophic forgetting problem when n = 4.",contrasting test_410,The subproblems are separately solved using existing techniques.,"existing unsupervised multilingual approaches (Chen and Cardie, 2018;Heyman et al., 2019;Alaux et al., 2019) solve the above subproblems jointly.",contrasting test_411,We also observe that UMWE fails at mapping Dutch language embeddings in the multilingual space even though Dutch is close to English.,"in a separate bilingual experiment, UMWE learns an effective English-Dutch crosslingual space (obtaining an average en-nl and nl-en score of 75.2).",contrasting test_412,"We observe that the proposed SL-GeoMM learns a highly effective multilingual space and obtains the best overall result, illustrating its robustness in this challenging setting.",other multilingual approaches fail to learn a reasonably good multilingual space.,contrasting test_413,"In recent years, pre-trained language models, such as GPT (Radford et al., 2018), BERT (Devlin et al., 2018), XL-Net (Yang et al., 2019), have been proposed and applied to many NLP tasks, yielding state-of-the-art performances.","the promising results of the pre-trained language models come with the high costs of computation and memory in inference, which obstruct these pre-trained language models to 
be deployed on resource-constrained devices and real-time applications.",contrasting test_414,"Concretely, both the embedding-layer distillation and the prediction-layer distillation employ the one-to-one layer mapping as in TinyBERT and BERT-PKD, where the two student layers are guided by the corresponding teacher layers, respectively.","different from the previous works, we propose to exploit the many-to-many layer mapping for Transformer (intermediate lay-ers) distillation (attention-based distillation and hidden states based distillation), where each student attention layer (resp. hidden layers).",contrasting test_415,"In this paper, we compare our BERT-EMD with several state-of-the-art BERT compression approaches, including the original 4/6-layer BERT models (Devlin et al., 2018), DistilBERT (Tang et al., 2019), BERT-PKD , Tiny-BERT (Jiao et al., 2019), BERT-of-Theseus (Xu et al., 2020).","the original TinyBERT employs a data augmentation strategy in the training process, which is different from the other baseline models.",contrasting test_416,"Previous works have shown that as the number of retrieved passages increases, so does the performance of the reader.","they assume all retrieved passages are of equal importance and allocate the same amount of computation to them, leading to a substantial increase in computational cost.",contrasting test_417,Open-Domain Question Answering (ODQA) requires a system to answer questions using a large collection of documents as the information source.,"to context-based machine comprehension, where models are to extract answers from single paragraphs or documents, it poses a fundamental technical challenge in machine reading at scale (Chen et al., 2017) .",contrasting test_418,"Models for reading comprehension (RC) commonly restrict their output space to the set of all single contiguous spans from the input, in order to alleviate the learning problem and avoid the need for a model that generates text explicitly.","forcing an 
answer to be a single span can be restrictive, and some recent datasets also include multi-span questions, i.e., questions whose answer is a set of non-contiguous spans in the text.",contrasting test_419,"When the answer spans appear only once in the input, this is simple, since the ground-truth tagging is immediately available.",there are many cases where a given answer span appears multiple times in the input.,contrasting test_420,"For query generation, prior research has focused mostly on extending standard Seq2Seq models where the input is a concatenation of earlier queries a user has submitted in a session (Sordoni et al.,Figure 1: An example search session where a user issues queries and optionally performs clicking at timestamps 1 to n. At time n+1, the user issues q n+1 following the previous search context of length n. 2015; Dehghani et al., 2017).","literature often leaves out the influence of clickthrough actions (i.e., red blocks in Figure 1), which we argue should be taken into account in the generative process as they could be surrogates of the user's implicit search intent (Yin et al., 2016).",contrasting test_421,"One of the widely exercised steps to establish a semantic understanding of social media is Entity Linking (EL), i.e., the task of linking entities within a text to a suitable concept in a reference Knowledge Graph (KG) (Liu et al., 2013;Yang and Chang, 2015;Yang et al., 2016; Ran et al., 2018).","it is well-documented that poorly composed contexts, the ubiquitous presence of colloquialisms, shortened forms, typing/spelling mistakes, and out-of-vocabulary words introduce challenges for effective utilisation of social media text (Baldwin et al., 2013;Michel and Neubig, 2018).",contrasting test_422,"As noted by Ethayarajh (2019) the deeper BERT goes, the more ""contextualized"" its representation becomes.",interpreting semantics of entities requires contextual knowledge in different degrees and always taking the last layer's output may not be 
the best solution.,contrasting test_423,"Compared to similar corpora, COMETA has the largest scale.",from a learning perspective the lack of sufficient regularity in the data could still leave its toll at test phase.,contrasting test_424,"Much of the recent progress in NLP is due to the transfer learning paradigm in which Transformerbased models first try to learn task-independent linguistic knowledge from large corpora, and then get fine-tuned on small datasets for specific tasks.","these models are overparametrized: we now know that most Transformer heads and even layers can be pruned without significant loss in performance (Voita et al., 2019;Kovaleva et al., 2019;Michel et al., 2019).",contrasting test_425,"There are recent works (Nguyen et al., 2019; Agarwal et al., 2020) also applying the Transformer to model the interactions among many entities.",their models neglect the important early interaction of the answer entity and cannot naturally leverage the pretrained language representations from BERT like ours.,contrasting test_426,"In the example, both the baseline model and the model with the sub-instruction module completes the task successfully.","unlike the baseline model which fails to follow the instruction and stops within 3 meters of the target by chance, our model correctly identifies the completeness of each sub-instruction, guides the agent to walk on the described path and eventually stops right at the target position.",contrasting test_427,"Although replacing a real user with a user simulator could address the issue, the simulator only roughly approximates real user statistics, and its development process is costly (Su et al., 2016).",humans could independently reason potential responses based on past experiences from the true environment.,contrasting test_428,"We propose a model-agnostic approach, COPT, that can be applied to any adversarial learning-based dialogue generation models.","to existing approaches, it learns on counterfactual responses 
inferred from the structural causal model, taking advantage of observed responses.",contrasting test_429,"As an important research issue in the natural language processing community, multi-label emotion detection has been drawing more and more attention in the last few years.","almost all existing studies focus on one modality (e.g., textual modality).",contrasting test_430,"This implies that the success of previous models may over-rely on the confounding non-target aspects, but not necessarily on the target aspect only.",no datasets can be used to analyze the aspect robustness more in depth.,contrasting test_431,"These two ratios should ideally be both 400%, because there are three generation strategies, plus one original sentence.",this gap is because not every original test sentence can qualify for every generation strategy.,contrasting test_432,"That is to say, whether one sentence could be selected depends on its salience and the redundancy with other selected sentences.",it is still difficult to model the dependency exactly.,contrasting test_433,Neural models have achieved remarkable success on relation extraction (RE) benchmarks.,there is no clear understanding which type of information affects existing RE models to make decisions and how to further improve the performance of these models.,contrasting test_434,"From the observations in Section 2, we know that both context and entity type information is beneficial for RE models.",in some cases RE models cannot well understand the relational patterns in context and rely on the shallow cues of entity mentions for classification.,contrasting test_435,Alt et al. 
(2020) also point out that there may exist shallow cues in entity mentions.,"there have not been systematical analyses about the topic and to the best of our knowledge, we are the first one to thoroughly carry out these studies.",contrasting test_436,"Other models selected operands first before constructing expression trees with operators in the second step (Roy et al., 2015; Roy and Roth, 2015).",such two-step procedures in these early attempts can be performed via a single-step procedure with neural models.,contrasting test_437,"It has received significant attention in question answering systems for structured data (Wang et al., 2015; Zhong et al., 2017; Yu et al., 2018b; Xu et al., 2020).","training a semantic parser with good accuracy requires a large amount of annotated data, which is expensive to acquire.",contrasting test_438,"Neural networks have the merits of convenient end-to-end training and good generalization, however, they typically need a lot of training data and are not interpretable.",logicbased expert systems are interpretable and require less or no training data.,contrasting test_439,A unique feature of this operationalisation of lexical ambiguity is that it is language independent.,"the quality of a possible approximation will vary from language to language, depending on the models and the data available in that language.",contrasting test_440,"If p(m | w) is concentrated in a small region of the meaning space (corresponding to a word with nuanced implementations of the same sense), the bound in eq. (13) could be relatively tight.",a word with several unrelated homophones would correspond to a highly structured p(m | w) (e.g. 
with multiple modes in far distant regions of the space) for which this normal approximation would result in a very loose upper bound.,contrasting test_441,Deep learning has led to significant improvement in text summarization with various methods investigated and improved ROUGE scores reported over the years.,gaps still exist between summaries produced by automatic summarizers and human professionals.,contrasting test_442,"In the research literature, human evaluation has been conducted as a complement (Narayan et al., 2018).",human evaluation reports that accompany ROUGE scores are limited in scope and coverage.,contrasting test_443,The above methods assign one score to each summarization output.,"to these methods, our errorcount based metrics are motivated by MQM for human writing, and are more fine-grained and informative.",contrasting test_444,"On PolyTope, as a representative of abstractive models, BART overwhelmingly outperforms the others (p < 0.01 using t-test).","excluding BART, extractive models take the following top three places.",contrasting test_445,"With respect to Accuracy, extractive methods are notably stronger in terms of Inacc Intrinsic and Extrinsic, which reflects that through directly copying snippets from the source, extractive methods are guaranteed to produce a summary with fair grammaticality, rationality and loyalty.","extractive methods do not show stronger performances in Addition and Omission, which is because extracted sentences contain information not directly relevant to the main points.",contrasting test_446,There is a high proportion in the first five sentences and a smooth tail over all positions for reference summaries.,"bertSumExt and SummaRuN-Ner extract sentences mostly from the beginning, thereby missing useful information towards the end.",contrasting test_447,"Compared with Point-Generator, Point-Generator-with-Coverage reduces Duplication errors from 68 to 11 and Omission errors from 286 to 256, proving that coverage is 
useful for better content selection.",point-Generator-with-Coverage yields more Addition and Inacc Intrinsic errors than point-Generator.,contrasting test_448,"As can be seen, abstractive models tend to neglect sentences in the middle and at the end of source documents (e.g., Bottom-Up, BertSumExtAbs), indicating that performance of abstractive summarizers is strongly affected by the leading bias of dataset.","bART can attend to sentences all around the whole document, slightly closer to the distribution of golden reference.",contrasting test_449,Their main goal is to verify the faithfulness and factuality in abstractive models.,"we evaluate both rule-based baselines and extractive/abstractive summarizers on 8 error metrics, among which faithfulness and factuality are included.",contrasting test_450,It has been conjectured that multilingual information can help monolingual word sense disambiguation (WSD).,"existing WSD systems rarely consider multilingual information, and no effective method has been proposed for improving WSD by generating translations.",contrasting test_451,Our first method extends the idea of Apidianaki and Gong (2015) to constrain S(e) based on sensetranslation mappings in BabelNet.,"instead of relying on a single translation, we incorporate multiple languages by taking the intersection of the individual sets of senses; that is, we rule out senses if their corresponding BabelNet synsets do not contain translations from all target languages.",contrasting test_452,Supervised systems are trained on sense-annotated corpora and generally outperform knowledge-based systems.,knowledge-based systems usually apply graph-based algorithms to a semantic network and thus do not require any sense-annotated corpora.,contrasting test_453,The ability to fuse sentences is highly attractive for summarization systems because it is an essential step to produce succinct abstracts.,"to date, summarizers can fail on fusing sentences.",contrasting test_454,Fusing two 
sentences together coherently requires connective phrases and sometimes requires rephrasing parts of sentences.,"higher abstraction does not mean higher quality fusions, especially in neural models.",contrasting test_455,Recent innovations in Transformer-based ranking models have advanced the state-ofthe-art in information retrieval.,"these Transformers are computationally expensive, and their opaque hidden states make it hard to understand the ranking process.",contrasting test_456,"One could argue that this is a superficial problem, as we can always give the model more free bits and decrease the loss in intermediary positions.","this is not so simple because increasing capacity leads to a worse model fit, as was noted by Alemi et al. (2018).",contrasting test_457,"While Yang et al. (2017) and Kim et al. (2018) both consider the use of pretrained LMs as encoders, the weights are not frozen such that it is hard to disentangle the impact of pretraining from subsequent training.",we freeze the weights so that the effect of pretraining can not be overridden.,contrasting test_458,Both baselines and variants have roughly similarly high agreement.,"our variants produce more diverse beginnings, while still managing to reproduce the topic or sentiment of the original document.",contrasting test_459,"Finally, verb-shuffling focuses on verbs as the salient element of an event, and should teach both principles of verb ordering and of verb suitability for context, and avoid artifacts from reordering arguments.","since verbs are shuffled naively, the task can in some cases be too easy due to differences in verb selectional preferences.",contrasting test_460,"In the vision and machine learning community, unsupervised induction of structured image representations (aka scene graphs or world models) has been receiving increasing attention (Eslami et al., 2016; Burgess et al., 2019; Kipf et al., 2020).",they typically rely solely on visual signal.,contrasting test_461,"Previous work 
has shown that incorporating natural language explanation into the classification training loop is effective in various settings (Andreas et al., 2018; Mu et al., 2020).","previous work neglects the fact that there is usually a limited time budget to interact with domain experts (e.g., medical experts, biologists) and high-quality natural language explanations are expensive, by nature.",contrasting test_462,"For example, an image with a Ringbilled gull has the description: ""This is a white bird with a grey wing and orange eyes and beak.""",this description also fits perfectly with a California gull (Figure 1).,contrasting test_463,Our results indicate that too many parameters can also harm multilinguality.,in practice it is difficult to create a model with so many parameters that it is overparameterized when being trained on 104 Wikipedias.,contrasting test_464,One might argue that our model 17 in Table 1 of the main paper is simply not trained enough and thus not multilingual.,table 10 shows that even when continuing to train this model for a long time no multilinguality arises.,contrasting test_465,This suggests that multilingual models can stimulate positive transfer for low-resource languages when monolingual models overfit.,"when we compare bilingual models on English, models trained using different sizes of fr/ru data obtain similar performance, indicating that the training size of the source language has little impact on negative interference on the target language (English in this case).",contrasting test_466,"Unlike language-specific adapters that can hinder transferability, shared adapters improve both within-language and cross-lingual performance with the extra capacity.",meta adapters still obtain better performance.,contrasting test_467,They observe that having more languages results in better zero-shot performance.,"several artifacts arise, as described by Dabre et al. (2020); Zhang et al. (2020); Aharoni et al. (2019); Arivazhagan et al. 
(2019), like offtarget translation and insufficient modeling capacity of the MNMT models.",contrasting test_468,"This might indicate that -at least during the adaptation -important information is captured in the encoder's adapter layer (in line with previous reports by Kudugunta et al., 2019) or that the decoder adaptation grows dependent on the encoder adapters, to the point where dropping the latter degrades the system.",further analysis would be needed to confirm either of these hypotheses.,contrasting test_469,"Not only is beam search usually more accurate than greedy search, but it also outputs a diverse set of decodings, enabling reranking approaches to further improve accuracy (Yee et al., 2019; Ng et al., 2019; Charniak and Johnson, 2005; Ge and Mooney, 2006).",it is challenging to optimize the performance of beam search for modern neural architectures.,contrasting test_470,FIXED-OURS is slower than Fairseq's implementation.,"while the two implementations achieve more similar BLEU on the development set, FIXED-OURS achieves higher BLEU on the test set (49.75 vs 49.57 on De-En and 39.19 vs 38.98 on Ru-En).",contrasting test_471,"The produced meaning representations can then potentially be used to improve downstream NLP applications (e.g., Issa et al., 2018;Song et al., 2019;Mihaylov and Frank, 2019), though the introduction of large pretrained language models has shown that explicit formal meaning representations might not be a necessary component to achieve high accuracy.","it is now known that these models lack reasoning capabilities, often simply exploiting statistical artifacts in the data sets, instead of actually understanding language (Niven and Kao, 2019;McCoy et al., 2019).",contrasting test_472,A possible advantage of this model is that it might handle longer sentences and documents better.,"it might be harder to tune (Popel and Bojar, 2018) 2 and its improved performance has mainly been shown for large data sets, as opposed to the generally 
smaller semantic parsing data sets (Section 3.3).",contrasting test_473,"For both methods, it results in a clear and significant improvement over the BERT-only baseline, 87.6 versus 88.1.",another common method of improving performance is adding linguistic features to the tokenlevel representations.,contrasting test_474,"This was done for efficiency and memory purposes, it did not make a difference in terms of F1-score.",for the Transformer model this improved F1-score by around 0.5.,contrasting test_475,Data efficiency can be improved by optimizing pre-training directly for future fine-tuning with few examples; this can be treated as a meta-learning problem.,"standard meta-learning techniques require many training tasks in order to generalize; unfortunately, finding a diverse set of such supervised tasks is usually difficult.",contrasting test_476,Bansal et al. (2019) proposed an approach that applies to diverse tasks to enable practical meta-learning models and evaluate on generalization to new tasks.,they rely on supervised task data from multiple tasks and suffer from meta-overfitting as we show in our empirical results.,contrasting test_477,"Owing to the warp layers, our training time per step and the GPU memory footprint is lower than LEOPARD (Bansal et al., 2019).",our training typically runs much longer as the model doesn't overfit unlike LEOPARD (see learning rate trajectory in main paper).,contrasting test_478,"As Wiki itself is a collaborative knowledge repository, editors are likely to attack others due to disputes on specific domain knowledge.",the users are the general public who post comments and tweets more casually for Yahoo and Twitter.,contrasting test_479,"Multilingual contextual embeddings have demonstrated state-of-the-art performance in zero-shot cross-lingual transfer learning, where multilingual BERT is fine-tuned on one source language and evaluated on a different target language.",published results for mBERT zero-shot accuracy vary as 
much as 17 points on the MLDoc classification task across four papers.,contrasting test_480,Many models inspired by BERT have since surpassed its performance.,"in contrast to the original BERT paper, many obtained better results by excluding the NSP task.",contrasting test_481,The decoupled biLSTM extended with ELMo inputs is able to outperform the transformer model initialised with RoBERTa pretraining.,"the best performance is achieved by using the transformer model with BART-large pretraining, with the decoupled model fine-tuned jointly on top of it (Lewis et al., 2019).",contrasting test_482,"The concept of Dialogue Act (DA) is universal across different task-oriented dialogue domains-the act of ""request"" carries the same speaker intention whether it is for restaurant reservation or flight booking.","dA taggers trained on one domain do not generalize well to other domains, which leaves us with the expensive need for a large amount of annotated data in the target domain.",contrasting test_483,It is often challenging and costly to obtain a large amount of in-domain dialogues with annotations.,"unlabeled dialogue corpora in target domain can easily be curated from past conversation logs or collected via crowd-sourcing (Byrne et al., 2019;Budzianowski et al., 2018) at a more reasonable cost.",contrasting test_484,"In prior work (Xie et al., 2019;Wei and Zou, 2019), unsupervised data augmentation methods including word replacement and backtranslation have been shown useful for short written text classification.","such augmentation methods are shown to be less effective (Shleifer, 2019) when used with pre-trained models.",contrasting test_485,"We find that for both tense and mood in the Indo-Aryan family, our model identifies required-agreement primarily for conjoined verbs, which mostly need to agree only if they share the same subject.",subsequent analysis revealed that in the treebanks nearly 50% of the agreeing verbs do not share the same subject but do agree by 
chance.,contrasting test_486,Professor Tanja Kallio and doctoral candidate Sami Tuomi consider the realisation of this goal entirely possible.,"""scientifically we are in the dark about the consequences of rewilding, and we worry about the general lack of critical thinking surrounding these often very expensive attempts at conservation.",contrasting test_487,"Such pairs show consistent improvement (+5 to +10), which suggests that the model learns to align the parallel knowledge from the source language to the target language.",we also must note that the effect is strongly dependent on the size of the overlapping sets.,contrasting test_488,"Students need good textbooks to study before they can pass an exam, and the same holds for a good machine reading model.","finding the information needed to answer a question, especially for questions in such a narrow domain as the subjects studied in high schools, usually requires a collection of specialized texts.",contrasting test_489,"However, little work has looked further up the pipeline and relied on the assumption that biases in data originate in human cognition.",this assumption motivates our work: an unsupervised approach to detecting implicit gender bias in text.,contrasting test_490,"Psychology studies often examine human perceptions through word associations (Greenwald et al., 1998).","the implicit nature of bias suggests that human annotations for bias detection may not be reliable, which motivates an unsupervised approach.",contrasting test_491,is likely addressed towards a woman and identify it as biased.,we only want the model to learn that references to appearance are indicative of gender if they occur in unsolicited contexts.,contrasting test_492,"For example, humans need the supervision of what is a noun before they do POS tagging, or what is a tiger in Wordnet before they classify an image of tiger in ImageNet.","for NLI, people are able to entail that a A man plays a piano contradicts b A man plays the 
clarinet for his family without any supervision from the NLI labels.",contrasting test_493,"Both depGCN and kumaGCN can correctly classify the sentiment of ""service"" as negative.","depGCN cannot recognize the positive sentiment of ""atmosphere"" while kumaGCN can.",contrasting test_494,"For the target ""atmosphere"", depGCN assigns the highest weight to the word ""terrible"", which is an irrelevant sentiment word to this target, leading to an incorrect prediction.","our model assigns the largest weight to the key sentiment word ""cozy"", classifying it correctly.",contrasting test_495,"Although collecting tags from users is timeconsuming and often suffers from coverage issues (Katakis et al., 2008), NLP techniques like those in Kar et al. (2018b) and Gorinski and Lapata (2018) can be employed to generate tags automatically from written narratives such as synopses.",existing supervised approaches suffer from two significant weaknesses.,contrasting test_496,August is selected to perform the rhapsody he's been composing at the same concert.,"wizard, who found out about August's performance by Arthur, interrupts the rehearsal and claims to be his father, and manages to pull August out of the school.",contrasting test_497,"Mozart would be an absolute imbecile compared to this little kid August Rush, and for those familiar with music, this aspect (the foundation, really) just kills the movie.It is impossible to play like Michael Hedges in your first few minutes with a guitar.",i just finished watching August Rush and i am in no way exaggerating when i say that it is by far the best movie i have ever seen.,contrasting test_498,"Im not sure who would actually enjoy this movie, maybe if you're 70, or under 12 but for everyone else I'd save your time.The acting itself wasn't bad, though the more interesting characters were played by Terrence Howard and Robin Williams, and they were both severely under-developed as you wanted to know more about them and less about this kid 
with the stupid smile all the time..","while the movie has a modern setting, it shares many plot elements with OLIVER TWIST, ending even better.It begins with a young couple of musicians that meets and has a one-night stand, and when she becomes pregnant her dad does everything to make her believe that the child died at birth, although he just put the child for adoption.A decade later the boy, Evan, lives in a orphanage and is mocked by the other kids because of his talents in music, that makes him like a savant with powerful skills.",contrasting test_499,"Although this film plays well to a broad audience, it is very mystical and based on simple, yet emotional themes that will play flat to some movie-goers.If you have strong parental feelings or enjoy movies centered on the power of human love and attraction, this story will move you like few films ever have.","if you are easily bored with themes that are lacking in danger and suspense or prefer gritty true-to-life movies, this one may come off as a disappointment.The screenplay seems written as a spiritual message intimating that there is an energy field that connects all of life, and music is one of the domains available to any who care to experience it.The plot is simple but deep in implication-an orphaned boy wants to reunite with his parents and feels that his inherited musical genius can somehow guarantee their return.",contrasting test_500,"Graph convolutional networks (GCN) is demonstrated to be an effective approach to model such contextual information between words in many NLP tasks (Marcheggiani and Titov, 2017;Huang and Carley, 2019;De Cao et al., 2019); thus we want to determine whether this approach can also help CCG supertagging.",we cannot directly apply conventional GCN models to CCG supertagging because in most of the previous studies the GCN models are built over the edges in the dependency tree of an input sentence.,contrasting test_501,Having more training data can help reduce overfitting and 
improve model robustness.,"preparing a large amount of annotated data is usually costly, labor intensive and time-consuming.",contrasting test_502,"In this work, we will focus on denoising recurrent neural network autoencoders (Vincent et al., 2010;Shen et al., 2020; see Appendix A).",any advancement in this research direction will directly benefit our framework.,contrasting test_503,"This observation may seem counter to the widely seen success of finetuning across other NLP scenarios, in particular with pretrained transformer models like BERT (Devlin et al., 2019).",finetuning does not always lead to better performance.,contrasting test_504,"Recent work starts to use gradient (Michel et al., 2019;Ebrahimi et al., 2017) to guide the search for universal trigger (Wallace et al., 2019) that are applicable to arbitrary sentences to fool the learner, though the reported attack success rate is rather low or they suffer from inefficiency when applied to other NLP tasks.","our proposed T3 framework is able to effectively generate syntactically correct adversarial text, achieving high targeted attack success rates across different models on multiple tasks.",contrasting test_505,"There are also systems (Luo et al., 2018; Kumar et al., 2019) that incorporate sense definitions into language models and achieve state-of-the-art performance.","most of the systems are implemented in a supervised manner using a widely exploited sense-annotated corpus, SemCor (Miller et al., 1994), and merging knowledge from the sense inventory as a supplement.",contrasting test_506,"By using regex-based extractors and a list of comprehensive dictionaries that capture crucial domain vocabularies, LUSTRE can generate rules that achieve SoTA results.","for more complex and realistic scenarios, dictionaries may not be available and regex-based extractors alone are not expressive enough.",contrasting test_507,EMR and LwF can achieve competitive performance at the beginning.,the gap between the two 
baselines and our method KCN becomes wider as more new classes arrive.,contrasting test_508,"Very few models exist that can predict either open vocab (Rashkin et al., 2018), or variable size output .",no existing task has both open vocabulary and variable-size low specificity-placing OPENPI in a novel space.,contrasting test_509,"The SCoNE dataset (Long et al., 2016) contains paragraphs describing a changing world state in three synthetic, deterministic domains.","approaches developed using synthetic data often fail to handle the inherent complexity in language when applied to organic, real-world data (Hermann et al., 2015;Winograd, 1972).",contrasting test_510,"For informativeness, we notice that all models perform well on the seen domains.","on unseen domains, the Naive approach fares poorly.",contrasting test_511,"Very similar to our task, Kang et al. (2019) developed language models informed by discourse relations on the bridging task; given the first and last sentences, predicting the intermediate sentences (bidirectional flow).",they did not explicitly predict content words given context nor use them as a self-supervision signal in training.,contrasting test_512,"As an alternative metric of attention explainablity, (Jain and Wallace, 2019) considers the relationship between attention weights and gradient-based feature importance score of each word.","prior research suggests using word as a unit of importance feature is rather artificial, as word is contextualized by, and interacts with other words: (Wiegreffe and Pinter, 2019) observes such limitation, and Shapley (Chen et al., 2018) measures interaction between features for capturing dependency of arbitrary subsets.",contrasting test_513,"The GNN-based models are particularly strong in this setting (see Appendix C), and this suggests that transferring knowledge about the relevancy of facts from structured to unstructured models may be a promising direction.","at the same time, the improvements for 
generalization were less substantial, indicating that some reasoning capacities are difficult to distill in this manner.",contrasting test_514,"This is not surprising as the generalization ability is a known issue in modern NLP models and is an ongoing research topic (Bahdanau et al., 2019; Andreas, 2019).",the generalization is in parallel with our contribution that is to improve the reasoning ability of NLP models.,contrasting test_515,Predictive methods such as probing are flexible: Any task with data can be assessed.,"they only track predictability of pre-defined categories, limiting their descriptive power.",contrasting test_516,"Since nPMI is information-theoretic and chance-corrected, it is a reliable indicator of the degree of information about gold labels contained in a set of predicted clusters.","it is relatively insensitive to cluster granularity (e.g., the total number of predicted categories, or whether a single gold category is split into many different predicted clusters), which is better understood through our other metrics.",contrasting test_517,"On one hand, this reflects surface patterns: primary core arguments are usually close to the verb, with ARG0 on the left and ARG1 on the right; trailing arguments and modifiers tend to be prepositional phrases or subordinate clauses; and modals and negation are identified by lexical and positional cues.","this also reflects error patterns in state-of-the-art systems, where label errors can sometimes be traced to ontological choices in PropBank, which distinguish between arguments and adjuncts that have very similar meaning (Kingsbury et al., 2002).",contrasting test_518,"Most of the existing work has adopted static sentiment lexicons as linguistic resource (Qian et al., 2017; Chen et al., 2019), and equipped each word with a fixed sentiment polarity across different contexts.",the same word may play different sentiment roles in different contexts due to the variety of part-of-speech tags and word
senses.,contrasting test_519,We observe that our proposed MT-H-LSTM-CRF consistently outperforms the baseline models.,"it performs slightly worse on RR-submission than on RR-passage, plausibly because there is no context information (i.e., background knowledge from original submissions) shared between different passage pairs.",contrasting test_520,"Sentence [1] is non-hyperbolic because the fact that ""her one step equals my two steps"" is not anything that would be surprising to anyone.","if one changes the number from ""two"" to ""100"", then the resulting sentence becomes hyperbolic because in reality it is not possible that one person's step would equal another person's 100 steps.",contrasting test_521,Their proposal is similar to ours; they exclude attention weights that do not affect the output owing to the application of transformation f and input x in the analysis.,our proposal differs from theirs in some aspects.,contrasting test_522,"The target word in (1 a) is associated 3 with the gold gloss (1 b) from WordNet (Fellbaum, 1998), the most used sense inventory in WSD.",generationary arguably provides a better gloss (1 c).,contrasting test_523,"It is not surprising that machine learning methods can easily surpass human performance if sufficient data is available (Wang et al., 2018).",data acquisition is a challenging task for some special domains.,contrasting test_524,The core idea of our method is finding a different entity for intervening on an entity in the observational example.,"finding a new entity set in a specific domain needs human efforts to collect entities, which has no difference from annotating more data.",contrasting test_525,"Second, the training and test data in these benchmarks are sampled from the same corpus, and therefore the training data usually have high mention coverage on the test data, i.e., a large proportion of mentions in the test set have been observed in the training set.","it is obvious that this high coverage is 
inconsistent with the primary goal of NER models, which is expected to identify unseen mentions from new data by capturing the generalization knowledge about names and contexts.",contrasting test_526,This is because they only annotate named mentions but ignore nominal and pronominal mentions.,"the context of named and nominal/pronominal mentions is generally identical, and therefore the models will be unable to distinguish between them once name regularity is removed.",contrasting test_527,"Temporal KGs often exhibit multiple simultaneous non-Euclidean structures, such as hierarchical and cyclic structures.","existing embedding approaches for temporal KGs typically learn entity representations and their dynamic evolution in the Euclidean space, which might not capture such intrinsic structures very well.",contrasting test_528,"More recently, generalized manifolds of constant curvature to a product manifold combining hyperbolic, spherical, and Euclidean components.",these methods consider graph data as static models and lack the ability to capture temporally evolving dynamics.,contrasting test_529,"Building upon recent NLI systems, our approach leverages representations from unsupervised pretraining, and finetunes a multiclass classifier over the BERT model (Devlin et al., 2019).",we first consider other models for related tasks.,contrasting test_530,Such results are problematic for entailment (since it is defined to depend on the truth of the premise).,our problem is primarily about the meaning of answers.,contrasting test_531,"The approach achieves consistently better performance compared to the first two rows on the inter-domain structure prediction task (For both, original Parseval and RST-Parseval), as we have previously shown in Huber and Carenini (2019).","only considering two out of three nuclearity classes (N-S and S-N), the system performs rather poorly on the nuclearity classification task.",contrasting test_532,"Our proposed new method (BERT-BASE-LWAN) 
that employs LWAN on top of BERT-BASE has the best results among all methods on EURLEX57K and AMAZON13K, when all and frequent labels are considered.","in both datasets, the results are comparable to BERT-BASE, indicating that the multi-head attention mechanism of BERT can effectively handle the large number of labels.",contrasting test_533,"To identify whether an example is biased, they employ a shallow model f b , a simple model trained to directly compute p(y|b(x)), where the features b(x) are hand-crafted based on the task-specific knowledge of the biases.","obtaining the prior information to design b(x) requires a dataset-specific analysis (Sharma et al., 2018).",contrasting test_534,"Training a shallow model The analysis suggests that we can obtain a substitute f b by taking a checkpoint of the main model early in the training, i.e., when the model has only seen a small portion of the training data.","we observe that the resulting model makes predictions with rather low confidence, i.e., assigns a low probability to the predicted label.",contrasting test_535,"This indicates that unregularized training optimizes faster on certain examples, possibly due to the presence of biases.",self-debiased training maintains relatively less variability of losses throughout the training.,contrasting test_536,"Closely related to our work, Singh et al. 
(2019) showed that replacing segments of the training data with their translation during fine-tuning is helpful.","they attribute this behavior to a data augmentation effect, which we believe should be reconsidered given the new evidence we provide.",contrasting test_537,"For example, by framing the immigration issue using the morality frame or using the security frame, the reader is primed to accept the liberal or conservative perspectives, respectively.","as shown in Example 1, in some cases this analysis is too coarse grained, as both articles frame the issue using the economic frame, suggesting that a finer grained analysis is needed to capture the differences in perspective.",contrasting test_538,"In Example 1, both texts use the Economic frame using same unigram indicator ('wage').","other words in the text can help identify the nuanced talking points (e.g., 'minimum wage' in case of left and 'stagnant wages' in case of right).",contrasting test_539,"Availability of large-scale datasets has enabled the use of statistical machine learning in vision and language understanding, and has led to significant advances.","the commonly used evaluation criterion is the performance of models on test-samples drawn from the same distribution as the training dataset, which cannot be a measure of generalization.",contrasting test_540,The goal of OOD generalization is to mitigate negative bias while learning to perform the task.,"existing methods such as LMH (Clark et al., 2019) try to remove all biases between question-answer pairs, by penalizing examples that can be answered without looking at the image; we believe this to be counterproductive.",contrasting test_541,"Notice that both mutations do not significantly change the input, most of the pixels in the image and words in the question are unchanged, and the type of reasoning required to answer the question is unchanged.",the mutation significantly changes the answer.,contrasting test_542,"As expected, the batch-aware
strategies, DAL and Core-Set, which were designed to increase diversity, are characterized by the most diverse batches, with DAL achieving the highest diversity values, demonstrating the success of using mini-queries (Gissin and Shalev-Shwartz, 2019) to reduce redundancy of the selected examples.","the other strategies tend to select less diverse batches, i.e., they are prone to choose redundant examples, especially in the imbalanced-practical scenario.",contrasting test_543,This roughly translates to increasing the throughput of the training process.,"when performing inference on a single data point, the latency of making predictions seems to dominate the runtime (Jouppi et al., 2017).",contrasting test_544,"Recent work has shown that contextualised embeddings pre-trained on large written corpora can be fine-tuned on smaller spoken language corpora to learn structures of spoken language (Tran et al., 2019).","for NLP tasks, fillers and all disfluencies are typically removed in pre-processing, as NLP models achieve highest accuracy on syntactically correct utterances.",contrasting test_545,"An assumption one could make based on the work by Radford et al. (2019), is that with this model, the results for any further downstream task would be improved by the presence of fillers.","we observe that to predict the persuasiveness of the speaker (using the high level attribute of persuasiveness annotated in the dataset (Park et al., 2014)), following the same procedure as outlined in subsubsection 2.1.2, that fillers, in fact, are not a discriminative feature.",contrasting test_546,"Stehwien and Vu (2017) and Stehwien et al. (2018) (henceforth, SVS18) showed that neural methods can perform comparably to traditional methods using a relatively small amount of speech context— just a single word on either side of the target word.","since pitch accents are deviations from a speaker’s average pitch, intensity, and duration, we hypothesize that, as in some non-neural models (e.g. 
Levow 2005; Rosenberg and Hirschberg 2009), a wider input context will allow the model to better determine the speaker’s baseline for these features an",contrasting test_547,"Recent advances in deep learning present a promising prospect in multimodal stock forecasting by analyzing online news , and social media (Guo et al., 2018) to learn latent patterns affecting stock prices (Jiang, 2020).","the challenging aspect in stock forecasting is that most existing work treats stock movements to be independent of each other, contrary to true market function (Diebold and Yılmaz, 2014).",contrasting test_548,"The news interview setting revolves around sets of questions and answers-naively, one may assume the interviewer to be the sole questioner.","media dialog has steadily deviated from this rigid structure, tending toward the broadly conversational (Fairclough, 1988).",contrasting test_549,"Natural language inference (NLI) data has proven useful in benchmarking and, especially, as pretraining data for tasks requiring language understanding.","the crowdsourcing protocol that was used to collect this data has known issues and was not explicitly optimized for either of these purposes, so it is likely far from ideal.",contrasting test_550,"Longer texts offer the potential for discourse-level inferences, the addition of which should yield a dataset that is more difficult, more diverse, and less likely to contain trivial artifacts.",one might expect that asking annotators to read full paragraphs should increase the time required to create a single example; time which could potentially be better spent creating more examples.,contrasting test_551,Giving annotators difficult and varying constraints could encourage creativity and prevent annotators from falling into patterns in their writing that lead to easier or more repetitive data.,"as with the use of longer contexts in PARAGRAPH, this protocol risks substantially slowing the annotation process.",contrasting test_552,Our chief 
results on transfer learning are conclusively negative: All four interventions yield substantially worse transfer performance than our base MNLI data collection protocol.,we also observe promising signs that all four of our interventions help to reduce the prevalence of artifacts in the generated hypotheses that reveal the label.,contrasting test_553,We bring up textual entailment as a unified solver for such NLP problems.,current research of textual entailment has not spilled much ink on the following questions: (i) How well does a pretrained textual entailment system generalize across domains with only a handful of domain-specific examples?,contrasting test_554,"Thus, various stress-testing datasets have been proposed that probe NLI models for simple lexical inferences (Glockner et al., 2018), quantifiers (Geiger et al., 2018), numerical reasoning, antonymy and negation (Naik et al., 2018).","despite the heavy usage of conjunctions in English, there is no specific NLI dataset that tests their understanding in detail.",contrasting test_555,"We presented some initial solutions via adversarial training and a predicate-aware RoBERTa model, and achieved some reasonable performance gains on CONJNLI.","we also show limitations of our proposed methods, thereby encouraging future work on CONJNLI for better understanding of conjunctive semantics.",contrasting test_556,"This method is later extended to a hierarchical setting with a pre-defined hierarchy (Meng et al., 2019); ConWea (Mekala and Shang, 2020) leverages contextualized representation techniques to provide contextualized weak supervision for text classification.",all these techniques consider only the text data and don't leverage metadata information for classification.,contrasting test_557,Note that the same hypothesis can also be made at the paragraph level.,"a major limitation of this approach is that paragraph sizes vary widely, ranging from a single word to a considerably huge block of text.",contrasting
test_558,"We use bidirectional contextual representation (Devlin et al., 2018) for encoding article text.","contrary to document representation using BERT (Adhikari et al., 2019), which is not adequate for large text documents, we first segment articles organically based on sections.",contrasting test_559,"Finally, there has been some work on directly training a model to extract entities and associated negation constraints (Bhatia et al., 2019).",these works usually assume the availability of good quality annotated negated entities.,contrasting test_560,This behavior unique to S C+R is safe for the noisy data filtering task since it can successfully detect lower-quality pairs with high precision.,"improperly underestimating some acceptable pairs (i.e., low recall) is one downside of S C+R , and we discuss its influences in Section 6.3.",contrasting test_561,"TXtract (Karamanolakis et al., 2020) incorporated the categorical structure into the value tagging system.",these methods suffer from irrelevant articles and is not able to filter out noisy answers.,contrasting test_562,"Similar to prior work (Lee et al., 2017), our training objective is to maximize the probability of the correct antecedent (cluster) for each mention span.","rather than considering all correct antecedents, we are only interested in the cluster for the most recent one.",contrasting test_563,These plots show that models have relatively modest memory usage during inference.,"their usage grows in training, due to gradients and optimizer parameters.",contrasting test_564,"This would ""correct"" the training objective to match prior work.",this did not have a noticeable effect on performance.,contrasting test_565,"Likewise, we were able to train a competitive model for which only the SpanBERT encoder from Joshi et al. 
(2019) was retained and the span scorer and pairwise scorer were randomly initialized.",we opted not to use that for the full experiments because training was more expensive in time.,contrasting test_566,"Neural generation models based on different strategies like soft-template (Wiseman et al., 2018; Ye et al., 2020), copy-mechanism (See et al., 2017), content planning (Reed et al., 2018; Moryossef et al., 2019), and structure awareness (Colin and Gardent, 2019) have achieved impressive results.","existing studies are primarily focused on fully supervised setting requiring substantial labeled annotated data for each subtask, which restricts their adoption in real-world applications.",contrasting test_567,"The work closest to our concept is Switch-GPT-2 (Chen et al., 2020b), which fits the pre-trained GPT-2 model as the decoder part to perform table-to-text generation.","their knowledge encoder is still trained from scratch, which compromises the performance.",contrasting test_568,"GPT (Radford, 2018) and GPT-2 (Radford et al., 2019) use a left-to-right Transformer decoder to generate a text sequence token-by-token, which lacks an encoder to condition generation on context.","MASS (Song et al., 2019) and BART (Lewis et al., 2019) both employ a Transformer-based encoder-decoder framework, with a bidirectional encoder over corrupted (masked) text and a left-to-right decoder reconstructing the original text.",contrasting test_569,"However, these autoencoding methods are not applicable to text generation where bidirectional contexts are not available.","an autoregressive model, such as GPT (Radford, 2018; Radford et al., 2019), is only trained to encode unidirectional context (either forward or backward).",contrasting test_570,A difference between UniLMs and PALM is that UniLMs are not fully autoregressive in the pre-training process.,pALM reduces the mismatch between pre-training and context-conditioned generation tasks by forcing the decoder to predict the continuation of
text input on an unlabeled corpus.,contrasting test_571,"For instance in Figure 1, players might prefer the command ""move rug"" over ""knock on door"" since the door is nailed shut.","even the state-of-the-art game-playing agents do not incorporate such priors, and instead rely on rule-based heuristics (Hausknecht et al., 2019a) or handicaps provided by the learning environment (Hausknecht et al., 2019a;Ammanabrolu and Hausknecht, 2020) to circumvent these issues.",contrasting test_572,"In a slightly different setting, Urbanek et al. (2019) trained BERT (Devlin et al., 2018) to generate contextually relevant dialogue utterances and actions in fantasy settings.",these approaches are game-specific and do not use any reinforcement learning to optimize gameplay.,contrasting test_573,"When k is small, CALM (n-gram) benefits from its strong action assumption of one verb plus one object.",this assumption also restricts CALM (n-gram) from generating more complex actions (e.g. ‘open case with key’) that CALM (GPT2) can produce.,contrasting test_574,It is really the complex actions captured when k > 10 that makes GPT-2 much better than ngram.,"though k = 20, 30, 40 achieve similar overall performance, they achieve different results for different games.",contrasting test_575,"It is interesting that CALM (w/ Jericho) is significantly better than CALM (GPT-2) on the games of Temple and Deephome (non-trivial scores achieved), which are not the games with ClubFloyd scripts added.","games like 905 and moonlit have scripts added, but do not get improved.",contrasting test_576,Natural Language Generation (NLG) is a challenging problem in Natural Language Processing (NLP)-the complex nature of NLG tasks arise particularly in the output space.,"to text classification or regression problems with finite output space, generation could be seen as a combinatorial optimization problem, where we often have exponentially many options |V | (here |V | is the size of the vocabulary and is the 
sentence length).",contrasting test_577,It is possible to stop training the decomposition model based on downstream QA accuracy.,"training a QA model on each decomposition model checkpoint (1) is computationally expensive and (2) ties decompositions to a specific, downstream QA model.",contrasting test_578,The similarity between BERT sentence embeddings can be reduced to the similarity between BERT context embeddings h_c^T h_{c'}.,"as shown in Equation 1, the pretraining of BERT does not explicitly involve the computation of h_c^T h_{c'}.",contrasting test_579,"Note that BERT sentence embeddings are produced by averaging the context embeddings, which is a convexity-preserving operation.",the holes violate the convexity of the embedding space.,contrasting test_580,We argue that NATSV can help eliminate anisotropy but it may also discard some useful information contained in the nulled vectors.,our method directly learns an invertible mapping to isotropic latent space without discarding any information.,contrasting test_581,The transferred model both increased the quality and diversity of the generation.,the transferred model exhibits narrower vocabulary usage.,contrasting test_582,"Statistic-based automatic metrics, such as BLEU (Papineni et al., 2002), mostly rely on the degree of word overlap between a dialogue response and its corresponding gold response.","due to the ignorance of the underlying semantic of a response, they are biased and correlate poorly with human judgements in terms of response coherence (Liu et al., 2016).",contrasting test_583,"For example, BLEU computes the geometric average of the n-gram precisions.","they can not cope with the one-to-many problem and have weak correlations with human judgements (Liu et al., 2016).",contrasting test_584,"ADEM proposed by Lowe et al.
(2017) achieves higher correlations with human judgements than the statistic-based metrics, which is trained with human-annotated data in a supervised manner.",it is time-consuming and expensive to obtain large amounts of annotated data.,contrasting test_585,"From the example in the first row, we can see that the score given by our metric is closer to the human score than the other two baseline metrics.","in the second-row example, our metric performs poorly.",contrasting test_586,"In this hard case, the topics of the model response are relevant to the dialogue context so that both our GRADE and BERT-RUBER, as learning-based metrics, deem that the response greatly matches the context.","the truth is that the model response is more likely a response for the previous utterance U1 rather than U2, which is hard for metrics to recognize.",contrasting test_587,"Such tasks include probing syntax (Hewitt and Manning, 2019; Lin et al., 2019; Tenney et al., 2019a), semantics (Yaghoobzadeh et al., 2019), discourse features (Chen et al., 2019; Liu et al., 2019; Tenney et al., 2019b), and commonsense knowledge (Petroni et al., 2019; Poerner et al., 2019).",appropriate criteria for selecting a good probe is under debate.,contrasting test_588,"BLEURT (Sellam et al., 2020) applies fine tuning of BERT, including training on prior human judgements.",our work exploits parallel bitext and doesn't require training on human judgements.,contrasting test_589,We find that a copy of the input is almost as probable as beam search output for the Prism model.,the model trained on ParaBank 2 prefers its own beam search output to a copy of the input.,contrasting test_590,"We find that the probability of sys as estimated by an LM, as well as and the cosine distance between LASER embeddings of sys and ref, both have decent correlation with human judgments and are complementary.",cosine distance between LASER embeddings of sys and src have only weak correlation.,contrasting test_591,"Its creation 
is a manifestation of creativity, and, as such, hard to automate.","since the development of creative machines is a crucial step towards real artificial intelligence, automatic poem generation is an important task at the intersection of computational creativity and natural language generation, and earliest attempts date back several decades; see Goncalo Oliveira (2017) for an overview.",contrasting test_592,"Since those differ significantly in style from the poems in KnownTopicPoems and UnknownTopicPoems, we do not train our language model directly on them.","we make use of the fact that sonnets follow a known rhyming scheme, and leverage them to train a neural model to produce rhymes, which will be explained in detail in Subsection 3.2.",contrasting test_593,"In particular, the generated poems seem to be more fluent and coherent than the alternatives.","they do not relate to any specific topic, which probably causes the drop in quality for poeticness, where this model always performs worse than NeuralPoet.",contrasting test_594,The increase of K continues to bring benefits until K = 4.,performance begins to drop when K > 3.,contrasting test_595,"Neural Graph Encoding Graph Attention Networks (GAT) (Velickovic et al., 2018) incorporates attention mechanism in feature aggregation, RGCN (Schlichtkrull et al., 2018) proposes relational message passing which makes it applicable to multi-relational graphs.",they only perform single-hop message passing and cannot be interpreted at path level.,contrasting test_596,"RGCNs (Schlichtkrull et al., 2018) generalize GCNs by performing relation-specific aggregation, making it applicable to multi-relational graphs.",these models do not distinguish the importance of different neighbors or relation types and thus cannot provide explicit relational paths for model behavior interpretation.,contrasting test_597,"This problem is clearly related to continual learning (CL) (Chen and Liu, 2018; Parisi et al., 2019; Li and Hoiem, 2017; Wu et
al., 2018; Schwarz et al., 2018; Hu et al., 2019; Ahn et al., 2019), which also aims to learn a sequence of tasks incrementally.","the main objective of the current CL techniques is to solve the catastrophic forgetting (CF) problem (McCloskey and Cohen, 1989).",contrasting test_598,"That is, in learning each new task, the network parameters need to be modified in order to learn the new task.",this modification can result in accuracy degradation for the previously learned tasks.,contrasting test_599,"For sentiment classification, recent deep learning models have been shown to outperform traditional methods (Kim, 2014;Devlin et al., 2018;Shen et al., 2018;Qin et al., 2020).",these models don't retain or transfer the knowledge to new tasks.,contrasting test_600,MTL is often considered the upper bound of continual learning because it trains all the tasks together.,"its loss is the sum of the losses of all tasks, which does not mean it optimizes for every individual task.",contrasting test_601,Automated radiology report generation has the potential to reduce the time clinicians spend manually reviewing radiographs and streamline clinical care.,"past work has shown that typical abstractive methods tend to produce fluent, but clinically incorrect radiology reports.",contrasting test_602,"In this work we focused on developing abstractive techniques as was done by past work on the MIMIC-CXR dataset (Liu et al., 2019; Boag et al., 2019).",in the future we intend to combine the abstractive methods developed in this work with retrieval methods to further improve upon our framework.,contrasting test_603,"Sentence fusion has the lowest TER, indicating that obtaining the fused targets requires only a limited number of local edits.","these edits require modeling the discourse relation between the two input sentences, since a common edit type is predicting the correct discourse connective (Geva et al., 2019).",contrasting test_604,Meng and Rumshisky (2018) propose a global context 
layer (GCL) to store/read the solved TLINK history upon a pre-trained pair-wise classifier.,they find slow convergence when training the GCL and pair-wise classifier simultaneously.,contrasting test_605,"Meanwhile, a two-way deliberation decoder (Xia et al., 2017) was used for response generation.",the relationship between the dialogue history and the last utterance is not well studied.,contrasting test_606,"As a result, capturing the incongruity between modalities is significant for multi-modal sarcasm detection.","the existing models for multi-modal sarcasm detection either concatenate the features from multi modalities (Schifanella et al., 2016) or fuse the information from different modalities in a designed manner (Cai et al., 2019).",contrasting test_607,"Thus, an effective sarcasm detector is beneficial to applications like sentiment analysis, opinion mining (Pang and Lee, 2007), and other tasks that require people's real sentiment.","the figurative nature of sarcasm makes it a challenging task (Liu, 2010).",contrasting test_608,"For the store owner, the task of correctly identifying the buying-intent utterances is paramount.","the number of utterances related to searching for products is expected to be significantly higher, thus biasing the classifier toward this intent.",contrasting test_609,"Other approaches for data balancing can include weak-labeling of available unlabeled data (Ratner et al., 2020), or even active learning (Settles, 2009).",both of these approaches require additional domain data which is not always available.,contrasting test_610,We focused our evaluation on the Semantic Utterance Classification (SUC) domain which is characterized by highly imbalanced data.,it is desirable to validate the applicability of our general balancing approach on other textual domains.,contrasting test_611,"Knowledge+BERT turns out to be the strongest baseline, outperforming the other three baselines, which also shows the importance of leveraging external
knowledge for the OAC2 task.",our model achieves superior performance over Knowledge+BERT which indicates leveraging domain-specific knowledge indeed helps.,contrasting test_612,"This is expected as these tweets include less standard words, such as insults.","except for perhaps emotion detection and offensive language identification, the difference is not significant, considering that the original RoBERTa tokenizer was not trained on Twitter text.",contrasting test_613,"In normal attention, all n-grams are weighted globally and short n-grams may dominate the attention because they occur much more frequently than long ones and are intensively updated.",there are cases that long n-grams can play an important role in parsing when they carry useful context and boundary information.,contrasting test_614,"These results could be explained by that frequent short n-grams dominate the general attentions so that the long ones containing more contextual information fail to function well in filling the missing information in the span representation, and thus harm the understanding of long spans, which results in inferior results in complete match score.","the categorical span attention is able to weight n-grams in different length separately, so that the attentions are not dominated by high-frequency short n-grams and thus reasonable weights can be assigned to long n-grams.",contrasting test_615,This is not surprising because short n-grams occur more frequently and are thus updated more times than long ones.,"the models with CATSA show a different weight distribution (the blue bars) among n-grams with different lengths, which indicates that the CATSA module could balance the weights distribution and thus enable the model to learn from infrequent long n-grams.",contrasting test_616,"Since the distances between the boundary positions of the wrongly predicted spans (highlighted in red) are relatively long, the baseline system, which simply represents the span as subtraction of the 
hidden vectors at the boundary positions, may fail to capture the important context information within the text span.",the span representations used in our model are enhanced by weighted n-gram information and thus contain more context information.,contrasting test_617,"Because training with soft targets provides smoother output distribution, T/S learning could outperform the single model training (Li et al., 2014;Hinton et al., 2015;Meng et al., 2018).",does a teacher always outperform a student?,contrasting test_618,All models benefit from the increasing of D as expected.,it is clear that Recurrence Online is the best performing model when D is small.,contrasting test_619,"We observe that the use of Static Rebalancing (Equation 6) instead, which is an extreme version of AISLe, is better than not resampling at all.",it is unable to reach the performance of AISLe on coverage metrics.,contrasting test_620,"Kreutzer and Sokolov (2018) proposed to jointly learn to segment and translate by using hierarchical RNN (Graves, 2016), but the method is not model-agnostic and slow due to the increased sequence length of characterlevel inputs.",our method is model-agnostic and operates on the word-level.,contrasting test_621,"Kudo (2018) also report scores using n-best decoding, which averages scores from n-best segmentation results.",n-best decoding is n-times time consuming compared to the standard decoding method.,contrasting test_622,"Starting from machine translation, it has been shown that subword regularization can improve the robustness of NLP models in various tasks (Kim, 2019;Provilkov et al., 2019;Drexler and Glass, 2019;Müller et al., 2019).","subword regularization relies on the unigram language models to sample candidates, where the language models are optimized based on the corpus-level statistics from training data with no regard to the translation task objective.",contrasting test_623,"SurfCon (Wang et al., 2019b) discovered synonyms on privacy-aware clinical 
data by utilizing the surface form information and the global context information.",they suffer from either low precision or low recall.,contrasting test_624,"In this example, the word victim in the first English sentence is identified by our tagger as a human entity.","its French translation victime is feminine by definition, and cannot be assigned another gender regardless of the context, causing a false positive result.",contrasting test_625,"Furthermore, they explain IBT as a way to better approximate the true posterior distribution with the target-to-source model.",it is unclear how their heuristic objective relates to the ideal objective of maximizing the model's marginal likelihood of the target language monolingual data.,contrasting test_626,"Deep neural models have demonstrated promising results in text classification tasks (Kim, 2014;Zhang et al., 2015;Howard and Ruder, 2018), owing to their strong expressive power and less requirement for feature engineering.","the deeper and more complex the neural model, the more it is essential for them to be trained on substantial amount of training data.",contrasting test_627,"Several natural language processing methods, including deep neural network (DNN) models, have been applied to address this problem.","these methods were trained with hard-labeled data, which tend to become over-confident, leading to degradation of the model reliability.",contrasting test_628,"In the training step, the model is trained to maximize the output probability of the correct class.","some studies reported that the deep learning classifier trained with hard-labeled data (1 for correct class, 0 for else) tends to become over-confident (Nixon et al., 2019;Thulasidasan et al., 2019).",contrasting test_629,These deep learning methods can effectively generate abstractive document summaries by directly optimizing pre-defined goals.,the meeting summarization task inherently bears a number of challenges that make it more difficult for 
end-to-end training than document summarization.,contrasting test_630,"As shown in Figure 1, the medical report generation system should generate correct and concise reports for the input images.",data imbalance may reduce the quality of automatically generated reports.,contrasting test_631,"To improve the clinical correctness of the generated reports, Liu et al. (2019a) and Irvin et al. (2019) adopted clinically coherent rewards for RL with CheXpert Labeler (Irvin et al., 2019), a rule-based finding mention annotator","in the medical domain, no such annotator is available in most cases other than English chest X-ray reports.",contrasting test_632,"The input data, which comprise a set of finding labels, can be augmented easily by adding or removing a finding label automatically.",the augmentation cost is higher for the target reports than the input data because the target reports are written in natural language.,contrasting test_633,"In addition to (c) above, we apply the modification process to the finding labels predicted by the image diagnosis module.",it is too expensive to evaluate the model in this condition because the cost of radiologist services is too high.,contrasting test_634,"For comparison with the previous image captioning approaches, we used BLEU-1, BLEU-2, BLEU-3, and BLEU-4 metrics calculated by the nlg-eval 10 library.","word-overlap based metrics, such as BLEU, fail to assume the factual correctness of generated reports.",contrasting test_635,"In ERNIE, entity embeddings are learned by TransE (Bordes et al., 2013), which is a popular transitionbased method for knowledge representation learning (KRL).","transE cannot deal with the modeling of complex relations , such as 1-to-n, n-to-1 and n-to-n relations.",contrasting test_636,"An intuitive way for vanilla GCN to exploit these labels is to encode different types of dependency relation with different convolutional filters, which is similar to RGCN (Kipf and Welling, 2017).","rGCN suffers from 
over-parameterization, where the number of parameters grows rapidly with the number of relations.",contrasting test_637,We considered each sentence as a single claim to keep our experimental setting clean and avoid noise from an automatic claim extractor.,some generations contain multiple claims that could be independently assessed.,contrasting test_638,"By thorough error analysis, we realize that for the order h-t-r (t-h-r follows the same logic), the model has to predict all t with regard to h in the second time step, without constraints from the r, and this makes every possible entity to be a prediction candidate.","the model is unable to eliminate no-relation entity pairs at the third time step, thus the model is prone to feed entity pairs to the classification layer with an low odds (low recall) but high confidence (high precision).",contrasting test_639,"Over the past two decades, significant progress has been made in the development of word embedding techniques (Lund and Burgess, 1996; Bengio et al., 2003; Bullinaria and Levy, 2007; Mikolov et al., 2013b; Pennington et al., 2014).",existing word embedding methods do not handle numerals adequately and cannot directly encode the numeracy and magnitude of a numeral Naik et al. 
(2019).,contrasting test_640,"These studies conclude that when certain nouns are dropped from the dominant language modality, multimodal models are capable of properly using the semantics provided by the image.","unlike this work, their explorations are limited to nouns and not expanded to other types of words.",contrasting test_641,They demonstrate high compression rate with little loss of performance.,they compress only the input embedding and not the softmax layer for language modeling and machine translation.,contrasting test_642,"Specifically, they regard each segmentation criterion as a single task under the framework of multi-task learning, where a shared layer is used to extract the criteriainvariant features, and a private layer is used to extract the criteria-specific features.",it is unnecessary to use a specific private layer for each criterion.,contrasting test_643,"As shown in several previous research (Pang et al., 2016;Yang et al., 2016;Mitra et al., 2017;Xiong et al., 2017;Devlin et al., 2018), interaction-focused models usually achieve better performances for text pair tasks.",it is difficult to serve these types of models for applications involving large inference sets in practice.,contrasting test_644,"However, it is difficult to serve these types of models for applications involving large inference sets in practice.","text embeddings from dual encoder models can be learned independently and thus pre-computed, leading to faster inference efficiency but at the cost of reduced quality.",contrasting test_645,"Recently the PreTTR model (MacAvaney et al., 2020) aimed to reduce the query-time latency of deep transformer networks by pre-computing part of the document term representations.","their model still required modeling the full document/query input length in the head, thus limiting inference speedup.",contrasting test_646,One of the most effective ways to reduce the running time is to reduce the input sequence length.,"as Table 4 reveals, 
blindly truncating the input to a BERT model will lead to a quick performance drop.",contrasting test_647,"When the head is transformerbased, the two-stage training plays an important role: the AUC ROC improves from 0.891 to 0.930.",the gain introduced by using two-stage training is less significant in other approaches such as DE-FFNN and DIPAIRFFNN.,contrasting test_648,"Recall that, in our framework, each encoder outputs its first few token embeddings as the input to the head, and we end to end to train the model to force the encoder to push the information of the input text into those outputted embeddings.",it is unclear to us what those outputted embeddings actually learn.,contrasting test_649,We formalize word reordering as a combinatorial optimization problem to find the permutation with the highest probability estimated by a POS-based language model.,it is computationally difficult to obtain the optimal word order.,contrasting test_650,"With some classifiers, we reached the same F1-score as when training on the original dataset, which is 20x larger.",performance varied markedly between classifiers.,contrasting test_651,"This demonstrates that GPT-2 significantly increased the vocabulary range of the training set, specifically with offensive words likely to be relevant for toxic language classification.",there is a risk that human annotators might not label GPT-2-generated documents as toxic.,contrasting test_652,TABLE-BERT is a BERT-base model that similar to our approach directly predicts the truth value of the statement.,the model does not use special embeddings to encode the table structure but relies on a template approach to format the table as natural language.,contrasting test_653,"In language tasks, adversarial training brings wordlevel robustness by adding input noise, which is beneficial for text classification.",it lacks sufficient contextual information enhancement and thus is less useful for sequence labelling tasks such as chunking and named 
entity recognition (NER).,contrasting test_654,"Masked language model (Devlin et al., 2018) smooths this inconsistency by applying replacement of tokens for some data while masking the rest (equivalent to word dropout).","the replacement in masked language model is randomly chosen from the full vocabulary, but the substitution in real scenarios follows some distribution (e.g. replacing “Massachusetts” with a location name is more likely than an animal name), which is not considered in masked language model.",contrasting test_655,"Zhao et al. (2018) proposed the learning scheme to generate a gender-neutral version of Glove, called GN-Glove, which forces preserving the gender information in pre-specified embedding dimensions while other embedding dimensions are inferred to be gender-neutral.",learning new word embeddings for large-scale corpus can be difficult and expensive.,contrasting test_656,Table 4 shows that there are constant performance degradation effects for all debiasing methods from the original embedding.,our methods minimized the degradation of performances across the baseline models.,contrasting test_657,A clear-cut solution to this problem is to focus more on samples that are more relevant to the target task during pretraining.,"this requires a task-specific pretraining, which in most cases is computational or time prohibitive.",contrasting test_658,"Further, Moreo et al. 
(2019) concatenates label embedding with word embeddings.",this approach cannot be directly implemented into PLMs since the new (concatenated) embedding is not compatible with the pretrained parameters.,contrasting test_659,The growing interest in argument mining and computational argumentation brings with it a plethora of Natural Language Understanding (NLU) tasks and corresponding datasets.,"as with many other NLU tasks, the dominant language is English, with resources in other languages being few and far between.",contrasting test_660,The ZS and TT baselines are almost always outperformed by the best translate-train model.,"when a large-scale English corpus is available (Figure 2b), the TT baseline becomes comparable to the best translate-train models.",contrasting test_661,Dialogue policy learning for Task-oriented Dialogue Systems (TDSs) has enjoyed great progress recently mostly through employing Reinforcement Learning (RL) methods.,these approaches have become very sophisticated.,contrasting test_662,"With respect to success rate, DiaAdv manages to achieve the highest performance by 6% compared to the second highest method GDPL.",diaAdv is not able to beat GDPL in terms of average turns.,contrasting test_663,"As to DiaSeq, it can achieve almost the same performance as GDPL from different perspectives while GDPL has a slightly higher F1 score.",the potential cost benefits of DiaSeq are huge since it does not require a user simulator in the training loop.,contrasting test_664,"Beyond this, DiaMultiClass does not benefit from the increase in expert dialogues and starts to fluctuate between 55% and 59%.",diaSeq can achieve higher performance when there are only 10% expert dialogue pairs and the success rate increases with the number of available expert dialogues.,contrasting test_665,The proposed methods can achieve state-of-the-art performance suggested by existing approaches based on Reinforcement Learning (RL) and adversarial learning.,"we have demonstrated 
that our methods require fewer training efforts, namely the domain knowledge needed to design a user simulator and the intractable parameter tuning for RL or adversarial learning.",contrasting test_666,Our evaluation settings hiding one of the three inputs to the MCQA models -are similar to Kaushik and Lipton 2018's partial input settings which were designed to point out the existence of dataset artifacts in reading comprehension datasets.,we argue that our results additionally point to a need for more robust training methodologies and propose an improved training approach.,contrasting test_667,"Among these, hyperedge replacement grammar (HRG) has been explored for parsing into semantic graphs (Habel, 1992;Chiang et al., 2013).","parsing with HRGs is not practical due to its complexity and large number of possible derivations per graph (Groschwitz et al., 2015).",contrasting test_668,"Drawing on this result, a recent work by Fancellu et al. (2019) introduces recurrent neural network RDGs, a sequential decoder that models graph generation as a rewriting process with an underlying RDG.",despite the promising framework the approach in FA19 2 falls short in several aspects.,contrasting test_669,Composition is constrained by the rank of a nonterminal so to ensure that at each decoding step the model is always aware of the placement of reentrant nodes.,we do not ensure semantic well-formedness in that words are predicted separately from their fragments and we do not rely on alignment information.,contrasting test_670,"Early CLWE approaches required expensive parallel data (Klementiev et al., 2012; Täckström et al., 2012).","later approaches rely on high-coverage bilingual dictionaries (Gliozzo and Strapparava, 2006; Faruqui and Dyer, 2014; or smaller ""seed"" dictionaries (Gouws and Søgaard, 2015; Artetxe et al., 2017).",contrasting test_671,"Multilingual BERT performs well on zero-shot cross-lingual transfer (Wu and Dredze, 2019; Pires et al., 2019) and its performance 
can be further improved by considering target-language documents through self-training (Dong and de Melo, 2019).",our approach does not require multilingual language models and sometimes outperforms multilingual BERT using a monolingual BERT student.,contrasting test_672,Neural network-based models augmented with unsupervised pre-trained knowledge have achieved impressive performance on text summarization.,"most existing evaluation methods are limited to an in-domain setting, where summarizers are trained and evaluated on the same dataset.",contrasting test_673,"Bigpatent B also exhibits relatively higher copy rate in summary but the copy segments is shorter than CNNDM.","bigpatent b, Xsum obtain higher sentence fusion score, which suggests that the proportion of fused sentences in these two datasets are high.",contrasting test_674,2) BART (SOTA system) is superior over other abstractive models and even comparable with extractive models in terms of stiffness (ROUGE).,it is robust when transferring between datasets as it possesses high stableness (ROUGE).,contrasting test_675,"Typical sources of transfer loss concern differences in features between domains (Blitzer et al., 2007;Ben-David et al., 2010).",other factors may govern model degradation for depression classification.,contrasting test_676,"Topical nuances in language may appropriately reflect elements of identity associated with mental health disorders (i.e. 
traumatic experiences, coping mechanisms)","if not contextualized during model training, this type of signal has the potential to raise several false alarms upon application to new populations.",contrasting test_677,"Figure 3b shows that BiLSTM uses 35% of context for short sentences, 20% for medium, and only 10% for long sentences.",bERT leverages fixed 75% of context words regardless of the sentence length.,contrasting test_678,"VQA requires techniques from both image recognition and natural language processing, and most existing works use Convolutional Neural Networks (CNNs) to extract visual features from images and Recurrent Neural Networks (RNNs) to generate textual features from questions, and then combine them to generate the final answers.",most existing VQA datasets are created in a way that is not suitable as training data for real-world applications.,contrasting test_679,"VQA models following this setting take characteristics of all answer candidates like word embeddings as the input to make a selection (Sha et al., 2018; Jabri et al., 2016).","in the open-ended setting, there is neither prior knowledge nor answer candidates provided, and the model can respond with any freeform answers.",contrasting test_680,"We conjecture that this is because when we have limited amount of target data, having more prior knowledge is beneficial to model performance, while having more target data will make prior knowledge less helpful.",our method can stably improve the performance because it sufficiently makes use of target data and source data.,contrasting test_681,"Importantly, we do not define a precondition event as an absolute requirement for the target (the door opening) to occur in all scenarios.",we do require that the target event likely would not have occurred in the current context.,contrasting test_682,This reveals the source of improvement in attack success rate between GENETICATTACK and TEXTFOOLER to be more lenient constraint 
application.,"gE-NETICATTACK's genetic algorithm is far more computationally expensive, requiring over 40x more model queries.",contrasting test_683,"Gilmer et al. (2018) laid out a set of potential constraints for the attack space when generating adversarial examples, which are each useful in different real-world scenarios.",they did not discuss NLP attacks in particular.,contrasting test_684,"Object manipulation and configuration is another subject that has been studied along with language and vision grounding (Bisk et al., 2016;Wang et al., 2016;Li et al., 2016;Bisk et al., 2018).",most studies focus on addressing the problem in relatively simple environments from a third-person view.,contrasting test_685,It is possible for instructions to be written that can pass all automated checks and still be of poor quality.,there is no quick and reliable way to automatically check if an instruction passes the tests but is still vague or misleading.,contrasting test_686,"There are several benchmarks (Wen et al., 2017;El Asri et al., 2017;Eric and Manning, 2017;Wei et al., 2018) to evaluate the performance of neural models for goal-oriented dialog.","these benchmarks assume a world of a ""perfect"" user who always provides precise, con- cise, and correct utterances.",contrasting test_687,"Zhao and Eskenazi (2018) created SimDial, which simulates spoken language phenomena, e.g. self-repair and hesitation. Sankar et al. (2019) introduce utterance-level and wordlevel perturbations on various benchmarks.","such variations have been largely artificial and do not reflect the ""natural variation"" commonly found in naturally occuring conversational data.",contrasting test_688,"The conversational activity patterns (denoted by A) handle the main business of conversation, i.e. 
the user request and the services provided by agent.",conversation management patterns help the user and agent to manage the conversation itself.,contrasting test_689,"Since the models are evaluated only on the agent responses present in the original test set, additional user and agent utterances for incorporating natural variation do not affect performance too much.",sMD is a real-world dataset of human-to-human conversations collected by crowdsourcing and we observe a much higher drop across both BLEU and Ent F1 scores.,contrasting test_690,This resulted in some novelty in the data collected and prevented the user utterances to be repetitive.,"to control data collection, the participants were asked to follow a set of instructions which resulted in user utterances largely focused on the task.",contrasting test_691,"The dataset is the largest currently as it has largest context complexity and state complexity (based on all possible combinations of customer and agent context features, like number of flights in the database, number of airlines, airport codes and dialogue action states), in comparison to other existing datasets mentioned above.",the authors don't share details on how the dataset was collected and instructions provided to the participants,contrasting test_692,"As shown in Figure 2, a correlation exists between some relevant events (such as the first joint press release) and the number of articles published.","however, a higher volume of articles does not always correlate with higher disagreement rates between annotators: interestingly, it seems that some events (such as the merger agreement) spread more uncertainty around the merger than others (such as the start of the antitrust trial).",contrasting test_693,"Similar to our baselines ScRNN (Sakaguchi et al., 2017) and MUDE (Wang et al., 2019), Li et al. 
(2018) proposed a nested RNN to hierarchically encode characters to word representations, then correct each word using a nested GRU .","these previous works either only train models on natural misspellings (Sakaguchi et al., 2017) or synthetic misspellings , and only focus on denoising the input texts from orthographic perspective without leveraging the retained semantics of the noisy input.",contrasting test_694,"These LMs captures the probability of a word or a sentence given their context, which plays a crucial role in correcting real-word misspellings.","all of the LMs mentioned are based on subword embeddings, such as WordPiece (Peters et al., 2018) or Byte Pair Encoding (Gage, 1994) to avoid OOV words.",contrasting test_695,"XLNet (Yang et al., 2019) also marginalize over all possible factorizations.","their work is focused on the conditional distribution p(y|x), and they do not marginalize over all possible factorizations of the joint distribution.",contrasting test_696,KERMIT is a generative joint distribution model that also learns all possible factorizations.,"kERMIT is constrained to two languages, while MGLM is a generative joint distribution model across any/all languages/text while learning all possible factorizations of the joint distribution.",contrasting test_697,"Multilingual Neural Language Model (Wada and Iwata, 2018) uses a shared encoder and language-dependent decoders to generate word embeddings and evaluate word alignment tasks.",our work unifies the neural architecture with a straightforward stack of self-attention layers.,contrasting test_698,Our work focused on a specific instantiation of channels as languages.,mGLm is not limited to only languages and can generalize to other notions of channels.,contrasting test_699,"suggest a misleading ""PERSON"" label 19 because of their context features, so that an incorrect NER prediction is expected if treating the three types of syntactic information equally.","the syntactic constituents give 
strong indication of the correct label through the word ""Rights"" for a ""LAW"" entity.",contrasting test_700,"Recently, neural models play dominant roles in NER because of their effectiveness in capturing contextual information in the text without requiring to extract manually crafted features (Huang et al., 2015; Lample et al., 2016; Strubell et al., 2017; Zhang and Yang, 2018; Peters et al., 2018; Yadav and Bethard, 2018; Cetoli et al., 2018; Akbik et al., 2018, 2019; Chen et al., 2019; Devlin et al., 2019; Zhu and Wang, 2019; Liu et al., 2019b; Baevski et al., 2019; Yan et al., 2019; Xu et al., 2019a; Zhu et al., 2020; Luo et al.).","to enhance NER, it is straightforward to incorporate more knowledge to it than only modeling from contexts.",contrasting test_701,It is thus far more important to evaluate various seed configurations than various target documents.,"however, we wanted to keep the computational cost of evaluation reasonably small, so either the number of seed configurations had to be reduced or the number of target documents for each configuration.",contrasting test_702,"In a recent edition, Rabelo et al. 
(2019) used a BERT model fine-tuned on a provided training set in a supervised manner, and achieved the highest F-score among all teams.","due to the reasons discussed in Section 4, their approach is not consistent with the nearest neighbor search, which is what we are aiming for.",contrasting test_703,Deep neural models have achieved impressive success in many areas.,their interpretability and explainability have remained broadly limited,contrasting test_704,Such methods extract parts of the model input that are important to the output according to some criterion.,"they are not suited to evaluate NL explanations that are not part of the input, which motivates our new simulatability metric.",contrasting test_705,"Among others, dependency trees help to directly link the aspect term to the syntactically related words in the sentence, thus facilitating the graph convolutional neural networks (GCN) (Kipf and Welling, 2017) to enrich the representation vectors for the aspect terms.",there are at least two major issues in these graph-based models that should be addressed to boost the performance.,contrasting test_706,These statements represent generic commonsense hypotheses about social behaviors and their acceptability that are held as norms in a society.,such normative judgments can also be strengthened or weakened given appropriate context.,contrasting test_707,Hyperbolic spaces offer a mathematically appealing approach for learning hierarchical representations of symbolic data.,it is not clear how to integrate hyperbolic components into downstream tasks.,contrasting test_708,Many models that fuse visual and linguistic features have been proposed.,"few models consider the fusion of linguistic features with multiple visual features with different sizes of receptive fields, though the proper size of the receptive field of visual features intuitively varies depending on expressions.",contrasting test_709,Zhao et al. 
(2018) also proposes a model with a structure that fuses multiple scales and languages for weakly supervised learning.,"they use concatenation as the method of fusion, whereas we use FiLM.",contrasting test_710,The paraphrase ratio of the augmented training set remains similar as the original set.,the ratio increases in the augmented testing set indicating the paraphrase clusters are sparser in the testing set.,contrasting test_711,"Another line of work learns Hornclause style reasoning rules from the KG and stores them in its parameters (Rocktaschel and Riedel, 2017; Das et al., 2018; Minervini et al., 2020).",these parametric approaches work with a fixed set of entities and it is unclear how these models will adapt to new entities.,contrasting test_712,"For example, the performance (MRR) of ROTATE model (Sun et al., 2019) drops by 11 points (absolute) on WN18RR in this setting (§3.4).","we show that with new data, the performance of our model is consistent as it is able to seamlessly reason with the newly arrived data.",contrasting test_713,"Recent works (Teru et al., 2020;Wang et al., 2020) learn entity independent relation representations and hence allow them to handle unseen entities.",they do not perform contextual reasoning by gathering reasoning paths from similar entities.,contrasting test_714,"As a result, many of the query relations were different from what was present in the splits of NELL-995 and hence is not a good representative.",we report test results for the best hyper-parameter values that we got on this validation set.,contrasting test_715,"For sentences in GENIA, the number of candidate regions generated by HiRe is 77.9% less than that of the enumeration method discarding 1.3% long entities and more than that of (Zheng et al., 2019).","the true recall of candidate regions generated by the enumeration method and HiRe are 98.7% and 98.1%, respectively.",contrasting test_716,"HiRe without HRR employs Average Word Representation (denoted as AWR) 
instead with precision 78.3%, recall 73.7% and F1 measure 75.9%.","to HiRe AWR , the absolute F1 measure improvement of HiRe HRR is 0.6%.",contrasting test_717,Therefore some recent researches attempt to endow the bots with proactivity through external knowledge to transform the role from a listener to a speaker with a hypothesis that the speaker expresses more just like a knowledge disseminator.,"along with the proactive manner introduced into a dialogue agent, an issue arises that, with too many knowledge facts to express, the agent starts to talks endlessly, and even completely ignores what the other expresses in dialogue sometimes, which greatly harms the interest of the other chatter to continue the conversation.",contrasting test_718,Models facilitated with external knowledge indeed generate more meaningful responses than peers that train only on the source-target dialogue dataset.,"these models tend to fall into another situation where the machine agent talks too much ignoring what the other has said, let alone the inappropriate use of knowledge.",contrasting test_719,"What's more, with copy mechanism, CopyNet, DeepCopy, and Initiative-Imitate perform better in terms of fluency and coherence because of the utilizing of proper knowledge.","comparing CopyNet and DeepCopy with Seq2Seq attn , the Engagement becomes worse because too much knowledge harms the ability to react to the proposed ques-tion very likely.",contrasting test_720,"Lexical resources such as WordNet (Miller, 1995) capture such synonyms (say, tell) and hypernyms (whisper, talk), as well as antonyms, which can be used to refer to the same event when the arguments are reversed ([a] 0 beat [a] 1 , [a] 1 lose to [a] 0 ).","WordNet’s coverage is insufficient, in particular, missing contextspecific paraphrases (e.g. 
(hide, launder), in the context of money).",contrasting test_721,"During training, each encoder learns a language model specific to an individual MT source, yielding diversity among experts in the final system.","in order to improve robustness of each encoder to translation variability, inputs to each encoder are shuffled by some tuned probability p shuffle .",contrasting test_722,Previous LSTM-based ensemble approaches propose training full parallel networks and ensemble at the final decoding step.,we found this was too expensive given the nonrecurrent Transformer model.,contrasting test_723,Most current multi-hop relation reasoning models require a good amount of training data (fact triples) for each query relation.,"the relation frequency distribution in KB is usually longtail , showing that a large portion of relations only have few-shot fact triples for model training.",contrasting test_724,"We look into the task of generalizing word embeddings: extrapolating a set of pre-trained word embeddings to words out of its fixed vocabulary, without extra access to contextual information (e.g. 
example sentences or text corpus).","the more common task of learning word embeddings, or often just word embedding, is to obtain distributed representations of words directly from large unlabeled text.",contrasting test_725,"We omit the prediction time for KVQ-FH, as we found it hard to separate the actual inference time from time used for other processes such as batching and data transfer between CPU and GPU.",we believe the overall trend should be similar as for the training time.,contrasting test_726,"In this field, the supervised methods, ranging from the conventional graph models (McCallum et al., 2000; Malouf, 2002; McCallum and Li, 2003; Settles, 2004) to the dominant deep neural methods (Collobert et al., 2011; Huang et al., 2015; Lample et al., 2016; Gridach, 2017; Liu et al., 2018; Zhang and Yang, 2018; Jiang et al., 2019; Gui et al., 2019), have achieved great success.","these supervised methods usually require large scale labeled data to achieve good performance, while the annotation of NER data is often laborious and time-consuming.",contrasting test_727,"Then, it finetunes the model pretrained on the source task (with the output layer being replaced) using the re-annotated data to perform the target task.",it is worth noting that the NER labels of words are contextdependent.,contrasting test_728,"However, given that style transfer can be viewed as a monolingual machine translation (MT) task, and that seq2seq models such as the transformer have shown to outperform unsupervised methods in multi-lingual MT when a sufficiently large parallel corpus is available (Lample et al., 2018; Artetxe et al., 2019; Subramanian et al., 2018), in our opinion it is expected that seq2seq would outperform unsupervised approaches if parallel data is available for style transfer.","to the best of our knowledge, a parallel corpus for style transfer currently does not exist.",contrasting test_729,"Therefore, finding the value for STAcc is trivial once C has been 
found.","finding a value for C is the main issue for the metric, since it depends on evaluating the set of generated outputs how many of them were converted successfully.",contrasting test_730,"We show that in a data-rich setting, with sufficient training examples, our approach outperforms a classification-based encoder-only model.","our sequence-to-sequence model appears to be far more data-efficient, significantly outperforming BERT with few training examples in a data-poor setting.",contrasting test_731,We discuss this question in Section 5.4.,"as a preview, we find that the choice of target tokens has a large impact on effectiveness in some circumstances, and these experiments shed light on why T5 works well for document ranking.",contrasting test_732,"While the approach can exploit pretrained knowledge when fine-tuning the latent representations, the final mapping (i.e., the fully-connected layer) needs to be learned from scratch (since it is randomly initialized).","T5 can exploit both pretrained knowledge and knowledge gleaned from fine-tuning in learning task-specific latent representations as well as the mapping to relevance decisions; specifically, we note that T5 is pretrained with tasks whose outputs are ""true"" and ""false"".",contrasting test_733,"It has long been observed that most relation tuples follow syntactic regularity, and many syntactic patterns have been designed for extracting tuples, such as TEXTRUNNER (Banko et al., 2007) and ReVerb (Fader et al., 2011).","it is difficult to design high coverage syntactic patterns, although many extensions have been proposed, such as WOE (Wu and Weld, 2010), OLLIE (Mausam et al., 2012), ClausIE (Corro and Gemulla, 2013), Standford Open IE , PropS and OpenIE4 (Mausam, 2016).",contrasting test_734,"If BERT considers either name to be a common French name, then a correct answer is insufficient evidence for factual knowledge about the entity Jean Marais.","if neither Jean nor Marais are considered French, 
but a correct answer is given regardless, we consider it sufficient evidence of factual knowledge.",contrasting test_735,"Existing approaches to improve generalization in QA either are only applicable when there exist multiple training domains (Talmor and Berant, 2019;Takahashi et al., 2019; or rely on models and ensembles with larger capacity (Longpre et al., 2019;Su et al., 2019;.","our novel debiasing approach can be applied to both single and multi-domain scenarios, and it improves the model generalization without requiring larger pre-trained language models.",contrasting test_736,Mahabadi et al. (2020) handle multiple biases jointly and show that their debiasing methods can improve the performance across datasets if they fine-tune their debiasing methods on each target dataset to adjust the debiasing parameters.,the impact of their method is unclear on generalization to unseen evaluation sets.,contrasting test_737,"In addition, some works (Sukhbaatar et al., 2015;Madotto et al., 2018;Wu et al., 2019) have considered integrating KBs in a task-oriented dialogue system to generate a suitable response and have achieved promising performance.",these methods either are limited by predefined configurations or do not scale to large KBs.,contrasting test_738,"However, as the KBs continue to grow in the real-world scenarios, such end-to-end methods of directly encoding and integrating whole KBs will eventually result in inefficiency and incorrect responses.",some works may put the user utterances through a semantic parser to obtain executable logical forms and apply this symbolic query to the KB to retrieve entries based on their attributes.,contrasting test_739,"In the transformer, the representation of each query token gets updated by self-attending to the representations of all the query tokens and graph nodes in the previous layer.",the representation of each graph node gets updated by self-attending only to its graph neighbors according to the connections of the 
sparsely connected transformer as well as all query tokens.,contrasting test_740,"Thus, we explore the possibility of augmenting the user-generated data with synthetic data in order to train a better model.",one needs to be careful with data augmentation using synthetic data as it inevitably has a different distribution.,contrasting test_741,"However, when the size of task-related data is large enough, using a pre-trained model does not deliver much benefits (TS2 and TS all ft).","finetuning clearly improves the performance of TS2 pt in both human and automatic metrics, where using only 1k domain data already produces satisfying scores in human metrics.",contrasting test_742,Multi2 OIE yields the highest recall for all languages by approximately 20%p.,"argOE has relatively high precision, but low recall negatively impacts its F1 score.",contrasting test_743,Most of the systems available for French fit in those three approaches.,"none of these systems have been thoroughly compared to each other, even with the release, in 2013, of a large coreference annotated corpus (Muzerelle et al., 2014), since each system uses slightly different versions of the corpus for evaluation (e.g. different train and test sets).",contrasting test_744,"CROC uses a feature indicating whether a mention is a new entity in the text, i.e. 
whether a mention is the first in its coreference chain.",this feature is usually only available when the corpus has been previously annotated or after the coreference resolution task: this is why we removed it.,contrasting test_745,"It is difficult to study coreference errors of an end-to-end system, since it is not possible to fully separate mention misidentifications from coreference issues.",it allows a better understanding of error source.,contrasting test_746,One reason for this is that the newspapers and magazines that were used for our corpus tend to contain quite complex texts (political commentary and reports in historical German).,"we also observed some systematic difficulties to apply our annotation system that is rooted in narrative theory to journalistic writing, e.g.",contrasting test_747,"We generally follow this idea: reported is more summarizing and less precise, while indirect ST&WR can usually be read as a transformation of direct ST&WR that allows us to reconstruct the 'original' quote in more detail.",there are sentences that follow the typical structure of indirect ST&WRa framing clause and a dependent subordinate clause containing the content -but do not allow such a reconstruction,contrasting test_748,"There is a large number of tools and software packages providing access to data repositories such as NLTK (Loper and Bird, 2002) or Spacy 1 .",many of these resources are not powerful enough to exploit this data to their full extent.,contrasting test_749,Therefore it is normally only available through a web GUI hosted by the Institute of the Czech National Corpus.,we obtained a tabular text file with the Czech-German alignment on request.,contrasting test_750,Reddit users make less use of argumentation proposition types in general: they use less normative language than the candidates and express less desire than Republican candidates.,"they use reported speech often, partly because their discussions occurred after the debates had 
occurred.",contrasting test_751,We found that models trained through multi-task learning where the primary task consists of argument component classification and the secondary task consists of specificity classification almost always outperform models that only perform argument component classification.,the corpus used in our previous study is not publicly available and therefore our previous results are not reproducible by other members of the research community.,contrasting test_752,"In our prior work (Lugini and Litman, 2018) on argument component classification for discussions, we used oversampling to alleviate the class imbalance present in argumentation labels (which is also present in the Discussion Tracker corpus).","since our Discussion Tracker experiments also include 3 task multi-task learning, oversampling with respect to argumentation labels might have negative impact on other tasks.",contrasting test_753,These hypotheses are motivated by our observation of differences between collaboration label distributions across argumentative moves.,"given the different unit of analysis for the annotation of collaboration (turn) versus argumentation and specificity (argument discourse unit), for the multi-task learning setting the collaboration annotations have been converted to BIO format in order to have one annotation per argument move 2 .",contrasting test_754,Punctuation symbols often indicates segment boundaries.,there may be cases where EDUs are not segmented.,contrasting test_755,The baseline model fails to identify the comparative ‘enough . . . 
to’ as a correlative and does not segment the sentence.,training for syntactic features allowed the model to correctly identify this construct and hence perform correct segmentation.,contrasting test_756,"As suspected, the baseline model performs poorly when the sentences are longer.",formulating the problem in an alternate fashion and injecting syntax make the model perform much better.,contrasting test_757,"Despite its simple mechanism, this algorithm comes with a high bias, which is unfavorable for learning new directions within the data.","multi-View-Training (Zhou and Goldman, 2004;Søgaard, 2010) tries to compensate this bias by different views of the data.",contrasting test_758,They collected explicit argument pairs with freely omissible discourse connectives which can be dropped independently of the context without changing the interpretation of the discourse relation.,sporleder and Lascarides (2008) argued training on explicit argument pairs was not a good strategy.,contrasting test_759,Note that Wu et al.
(2017) collected explicit argument pairs using a similar method.,they only used argument pairs located within the same sentence while we do not apply this constraint.,contrasting test_760,"We also use a content extraction tool to extract article content from an HTML file, and apply a shingling-based method to identify near-duplicate articles.",our systems differ in two major ways.,contrasting test_761,"Furthermore, some datasets are available for evaluating additional dimensions of essay quality in English (Mathias and Bhattacharyya, 2018).","only a few evaluation datasets are available for Japanese writings, and even fewer Japanese learner essay datasets are.",contrasting test_762,"We created the feature-based models using linguistic features based on (Lee and Hasebe, 2017).",whether these features are enough to perform AES is unclear.,contrasting test_763,"As a result, the neural approach for AES has been actively studied in recent years (Taghipour and Ng, 2016).","no neural-network-based AES system is available for the Japanese language; furthermore, the BERT model has not been applied for an AES task with multiple dimensions thus far.",contrasting test_764,The feature-based model predicted a score that was two points lower than the actual content and organization trait scores in essay A and a score that was two points lower than the organization trait score in essay B.,the neural-network-based model predicted the score correctly for essay A and predicted a score that was only one point lower than the actual language trait score in essay B.,contrasting test_765,"Further, this model may provide a high score for an unexpected input.",the neural-network-based model predicted low scores for all columns.,contrasting test_766,"Reported results were obtained with traditional machine learning methods and, to some extent, it would be interesting to test more recent classification methods, such as deep neural networks.","the corpus might not be large enough for such an 
approach, which further motivates this kind of experiment.",contrasting test_767,"They proved to be a strong baseline in the binary classification task, outperforming the surface text-based features and the graph-based deep semantic features.","on the five-level classification task, they were outperformed by all other feature sets.",contrasting test_768,"To create such collections, there is a substantial need for automatic approaches that can distinguish the documents of interest for a collection out of the large collections (of millions in size) from Web Archiving institutions.","the patterns of the documents of interest can differ substantially from one document to another, which makes the automatic classification task very challenging.",contrasting test_769,The time-domain features offer a simple way to analyse audio signals and are directly extracted from the samples of the audio signal (waveform).,"frequencydomain features are extracted from the sound spectrum, a representation of the distribution of the frequency content of sounds (Giannakopoulos and Pikrakis, 2014d).",contrasting test_770,The corpus contains tweets annotated with 28 emotions categories and captures the language used to express an emotion explicitly and implicitly.,the availability of datasets created specifically for languages other than English is very limited.,contrasting test_771,"For several years, affect in speech has been encoded using discrete categories such as anger, sadness or neutral speech.","in many recent papers, researchers preferred using affective dimensions.",contrasting test_772,"Regarding the effect of hesitation on fundamental frequency (f0), a study on German spontaneous speech (Mixdorff and Pfitzinger, 2005) found no impact of hesitations marked by fillers on the overall f0 pattern at the utterance level.","a study relying on synthesized speech (Carlson et al., 2006) in Swedish showed a moderate effect of the f0 slope on perceived hesitation, as well as a moderate 
effect of the insertion of creaky voice.",contrasting test_773,Thus all adaptations of the speaking styles to different degrees of hesitation are individual as well and cannot be summed up as a group mean.,the tendencies of the individual changes remain similar across the group.,contrasting test_774,"Child language studies are crucial in improving our understanding of child well-being; especially in determining the factors that impact happiness, the sources of anxiety, techniques of emotion regulation, and the mechanisms to cope with stress.",much of this research is stymied by the lack of availability of large child-written texts.,contrasting test_775,"Valence was significantly negatively associated with arousal (r = -.06, p < .001), although the effect was small, suggesting minimal collinearity.",correlations with dominance (both A-D and D-V) were much stronger and significant (p < .001).,contrasting test_776,"Interpreting it at face value, we might conclude that the results reflect increased capabilities in emotion regulation (i.e., being more in control of one's emotions) (Zimmermann and Iwanski, 2014).","we are hesitant to make this conclusion because individual words likely have poor correspondence with emotion regulation, which involves complex processes.",contrasting test_777,"In our results, we observe that when maximum of the 3 CCCs computed on each pair is low, the predicted satisfaction is likely to be bad.","if this maximum is high, the predicted satisfaction is likely to be good.",contrasting test_778,It often carries both positive and negative feelings.,"since this label is quite infrequent, and not available in all subsets of the data, we annotated it with an additional Beauty/Joy or Sadness label to ensure annotation consistency.",contrasting test_779,"The results of the crowdsourcing experiment, on the other hand, are a mixed bag as evidenced by a much sparser distribution of emotion labels.","we note that these differences can be caused by 1) the 
disparate training procedure for the experts and crowds, and 2) the lack of opportunities for close supervision and on-going training of the crowds, as opposed to the in-house expert annotators.",contrasting test_780,Nostalgia is still available in the gold standard (then with a second label Beauty/Joy or Sadness to keep consistency).,"confusion, Boredom and Other are not available in any sub-corpus.",contrasting test_781,This line of work is predominantly based on word-level supervision.,we learn word ratings from document-level ratings.,contrasting test_782,"Mean Star Rating, Binary Star Rating, and Regressions Weights learn exclusively from the available document-level gold data.","one of the major advantages of the MLFFN is that it builds on pre-trained word embeddings, thus implicitly leveraging vast amounts of unlabeled text data.",contrasting test_783,Another seemingly obvious evaluation strategy would be to predict document-level ratings from derived word-level lexica using the empathic reactions dataset in a cross-validation setup.,we found that this approach has two major drawbacks.,contrasting test_784,"The clusters tend to be consistent regarding NE types, as illustrated in Table 3.",both false positives (FP; non-NE entries in NE clusters) as well as false negatives (FN; NE entries in non-NE clusters) do occur.,contrasting test_785,The transcription of the production as well as the target form are generally available in an orthographic form.,"if we are interested primarily in oral production and as this oral production is sometimes restricted to isolated words, it is ultimately more important to place the analysis at the phonological level.",contrasting test_786,"In this characterization, /p/ (+ coronal, + anterior) differs from /t/ (+ coronal) by one feature and similarly from /k/ (-anterior).","/t/ (+ coronal, + anterior) differs from /k/ (-coronal, -anterior) by two features, which is not very satisfactory from an articulatory point of view where it 
would seem logical to respect the order /ptk/, that is to say /t/ equidistant from /p/ and /k/, /p/ and /k/ being more distant.",contrasting test_787,The maximum precision score that Terrier reached is 0.92 indicating that the optimum of 1.0 was never achieved.,sODA reached the maximum score of 1.0 in five cases.,contrasting test_788,"Indeed lemmatization, stemming, numeral masking and entity masking did not improve results.",stop word filtering produces better word embeddings for this task.,contrasting test_789,"Currently, the serial corpora provided by NLPCC are the mainstream evaluation benchmarks for Chinese EL.","all of them stem from Chinese microblogs, which can be fairly short and noisy.",contrasting test_790,"Evidently, mentions in the Hard document are rather ambiguous, as Hinrich, Chandler can refer to many different entities.","easy document contains very obvious mentions such as BRICS, UN and the country names.",contrasting test_791,These comments are useful for the learner.,comments are noise for an evaluation dataset because automatic evaluation methods utilizing corrected sentences typically rely on the matching rate between the system output and the corrected sentences to calculate a score.,contrasting test_792,"In this model, we tokenized the learner sentence at the character level.",we tokenized a corrected sentence at the word level.,contrasting test_793,"Therefore, it turns out that a CNN-based method is effective for errors that can be corrected with only the local context (Chollampatt and Ng, 2018).","both the NMT and SMT systems could hardly correct errors that needed to be considered in context, for example, abbreviation or formal and casual style errors.",contrasting test_794,The number of true positives (TP) in the NMT system was larger than that in the SMT system.,the number of false positives (FP) in the NMT system was considerably larger than that in the SMT system.,contrasting test_795,Lang-8's original annotation contains annotator's 
comments that are noise for evaluation.,our evaluation corpus does not contain such comments.,contrasting test_796,"The second sentence does match the index query, so the full traversal is performed.","because there is an intervening xcomp relation, the traversal fails.",contrasting test_797,"In the above example, “आतंकी हमला” (terrorist attack) is a multiword event trigger with annotation labels B_Event and I_Event respectively.", ‘हमला' (attack) itself is an event trigger with annotation label B_Event.,contrasting test_798,"The recognition of medical concepts and their attributes in EEG reports is vital for many applications requiring data-driven representation of EEG-specific knowledge, including decision support systems.","the identification of the medical concepts in the EEG reports is not sufficient, as these concepts also exhibit clinically-relevant relations between them.",contrasting test_799,"Since the Morphology best defines the EEG activities, we decided to use it as an anchor for each mention of an EEG activity in the EEG report.","Morphology represents the type or ""form"" of an EEG activity, which may have multiple values, as seen in Table 2, therefore the Morphology remains also as an attribute of the EEG activities.",contrasting test_800,"In deciding the nodes of the HAD, we have consulted the Epilepsy Syndrome and Seizure Ontology (ESSO) 2 , which encodes 2,705 classes with an upper ontology targeting epilepsy and selected the concepts that best describe EEG activities.","eeG events, which are frequently mentioned in eeG reports as well, can be recognized only by identifying the text span where they are mentioned and their polarity and modality attributes.",contrasting test_801,"In a controlled laboratory environment, participants used the Crowdee platform for performing the summary quality evaluation task.","to the crowdsourcing study, all the participants were also instructed in a written form following the standard practice for laboratory 
tests.",contrasting test_802,Automatic analysis of connected speech by natural language processing techniques is a promising direction for diagnosing cognitive impairments.,"some difficulties still remain: the time required for manual narrative transcription and the decision on how transcripts should be divided into sentences for successful application of parsers used in metrics, such as Idea Density, to analyze the transcripts.",contrasting test_803,"Prosodic features have been shown to be very effective to discriminate between different types of sentence boundaries and in general their usage reflects better results (Shriberg et al., 2009;Huang et al., 2014;Khomitsevich et al., 2015).","to put prosodic features into practice we need alignments between the audio and its transcription, which is hard to obtain mainly due to the low quality of the recordings.",contrasting test_804,"From these results, we can assume that sequenced-figures narratives bring linguistic features also present in retellings, but the reverse direction is not true, as we can see in Table 5.","if a researcher will only work on retelling tasks, Table 5 shows that using only retelling datasets for training led to better results for the retelling task.",contrasting test_805,"For a script that is not phonetic, e.g., Chinese characters, grapheme-tophoneme conversion is considered compulsory.","as Hangul is phonetic, in other words, text in Hangul sounds as it is written, we stick with graphemes rather than converting them into phonemes.",contrasting test_806,"Much of the information about temples is available as text in the open web, which can be utilized to conduct such a study.","this information is not in the form of a learning resource, which can be readily used for such studies.",contrasting test_807,"On the one hand high quality resources are needed that contain (English) glosses, part of speech tagging as well as underlying morphophonemes forms.",the resource needs to be large enough for the 
wanted forms to occur in the data.,contrasting test_808,"On a positive side, 50% of the sentences were different from one seed strategy to the other, suggesting for an approach where strategies are mixed.",we also observed that (a) tends to yield more similar queries over time and (c) is too time-consuming for practical use.,contrasting test_809,"When English is the source language, and Japanese is the target language, there are only 5 pairs in the test data where the source and target words are identical, i.e., cases where the copy baseline is correct.","in the case of English being the target language, and Japanese being the source language, there are 270 pairs where the source and target words are identical.",contrasting test_810,Wikipedia is usually used as a high-quality freely available multilingual corpus as compared to noisier data such as Common Crawl.,"for the two languages under study, Wikipedia resulted to have too much noise: interference from other languages, text clearly written by non-native speakers, lack of diacritics and mixture of dialects.",contrasting test_811,"For example, a sentence describing the English cricket team's victory over India could invoke negative sentiment, given the annotator's strong support of the latter.",the actual label of such a statement would be positive because of the author's intention.,contrasting test_812,Logistic Regression offers marginally better performance than Linear-SVM in terms of precision (Precision for LR is 0.675).,the former fails to outperform the latter in the other three metrics of evaluation.,contrasting test_813,"The choice of approach has a fundamental effect on the end result: in the case of expansion (translation), the new wordnet will be fully meaning-aligned with the source language (English), which is ideal for cross-lingual uses: as most wordnets are already aligned with PWN, we get bilingual translations to all those languages 'for free'.",a certain linguistic bias is introduced by the 
fact that only meanings for which English lexicalisations exist will appear in the wordnet.,contrasting test_814,"A common form of sarcasm consists of a positive sentiment contrasted with a negative situation (Riloff et al., 2013), therefore it was likely that learning the emotional information of a text would facilitate the task of irony/sarcasm prediction.","it was seen that the majority of Persian Twitter users include either humor, irony or sarcasm in their posts.",contrasting test_815,"pre-trained a neural network model to predict emojis in the text and then transferred the model for different related tasks including sarcasm detection (Felbo et al., 2017).","with approaches that use feature engineering to extract features , in (Amir et al., 2016) features are automatically extracted by learning user embeddings which requires users' preceding messages.",contrasting test_816,Please note that we carry out our analysis on the whole corpus.," if one were interested only in the most reliable portions of the corpus, i.e. 
the cases all annotators agreed upon, different confidence thresholds can be set, as shown in Table 2.",contrasting test_817,"The high frequency of idioms in persuasive and rhetorical language corroborates the statements by McCarthy (1998) that idioms are used for commenting on the world, rather than describing it, and Minugh 2008, who finds that idioms are used most often by those with some authority, especially when conveying 'received wisdom'.",this genre distinction is still quite crude.,contrasting test_818,"Due to the fact that the majority of indigenous languages were traditionally exclusively oral cultures, manuscripts typically do not play a primary role in the study of those languages.","in particular handwritten notes that were created by researchers during fieldwork often are important information sources that, beyond other things, contain highly relevant information, ranging from object language data with attached translations and glossings, over lexical and grammatical descriptions to complex metadata, figural data and of course individual interpretation by the respective researcher.",contrasting test_819,The resulting derived resource on the one hand shows which data from the resource catalogue and which sessions from the corpora have been published in certain bibliographic items.,it demonstrates that some have actually been published in different bibliographic items.,contrasting test_820,A popular and widely method is crawling different web pages.,this is only possible under the assumption that there are sufficient web sites written in the target language.,contrasting test_821,"In some cases, the large number of hapax is related to a poor quality of the corpus that might be caused by spelling errors or the presence of foreign words (Nagata et al., 2018).","our scenario is expected given the agglutinative nature of the four target languages, so they might present a vast vocabulary diversity.",contrasting test_822,"Crowdsourcing platforms, such as 
Amazon Mechanical Turk, have been an effective method for collecting such large amounts of data.","difficulties arise when task-based dialogues require expert domain knowledge or rapid access to domain-relevant information, such as databases for tourism.",contrasting test_823,"Prior crowdsourced wizarded data collections have divided the dialogue up into turns and each worker's job consists of one turn utterance generation given a static dialogue context, as in the MultiWoZ dataset (Budzianowski et al., 2018).","this can limit naturalness of the dialogues by restricting forward planning, collaboration and use of memory that humans use for complex multi-stage tasks in a shared dynamic environment/context.",contrasting test_824,"The first WordNet for Bulgarian was built in the BalkaNet project (Koeva and Genov, 2004).",to this date the lexicon is not freely available.,contrasting test_825,"A free core-WordNet for Bulgarian was made available in the BulTreeBank Wordnet (Simov and Osenova, 2010), but unfortunately its size is rather small - 8 936 senses.","it has very good quality, so when we had to choose a translation for a word in English, we first looked for a corresponding synset in the BulTreeBank Wordnet.",contrasting test_826,"The dictionary is compatible with GF, and contains morphology and English-Bulgarian translations.",the translations are not sense annotated.,contrasting test_827,Sense annotations are available only for the words exemplified with that sentence.,"in order to provide good translations, we sense tagged all words in the corpus.",contrasting test_828,"The target audience of the digital literature on this platform is young people (roughly below 30 years of age), it thus has the potential of being biased in age.",this is the age group that is the most fluent in and most likely to use written Cantonese and therefore it reflects the current usage of written Cantonese in the society.,contrasting test_829,Authors like Brysbaert and New (2009)
suggest that a size of about 15 million tokens guarantees a robust estimation of the term frequencies (i.e. which correlates well with psycholinguistic measures),"because HKC is not particularly well resourced and that corpus data (especially spoken) is not always freely available, we made do with what is available at the time, and have much less than that target size.",contrasting test_830,"In order to handle these cases (which become more frequent as the resources to be modelled become more 'scholarly') we decided to make etymology a class, Etymology, in Part 3.","since etymologies usually represent an ordering of etymons (and, of course, etymons can be associated with more than one etymology and even more than one etymology for the same entry), we opted to create indirect rather than direct associations between etymologies and etymons.",contrasting test_831,"If a lemma differs from one source to another, we create multiple entries and disambiguate them manually based on the definitions obtained from other resources.",if the lemma is the same we fuse their information.,contrasting test_832,We need to enrich it and validate it by Old French or diachrony specialists.,the manual process is long and tedious especially when it comes to enrich the lexicon by decreasing the silence rate.,contrasting test_833,"In recent years, people have started investigating neologisms computationally (e.g. Ahmad (2000; Kerremans et al.
(2011)), and online dictionaries and datasets provide convenient electronic versions of a word's year of first use.",these resources vary in the amount of information they provide and are often limited to a handful of languages.,contrasting test_834,"The post-editing speed here was lower, around 1.5K words/hour.","the proportion of tags edited, 1.8%, is only slightly higher.",contrasting test_835,"When given the same set of essays to evaluate and enough graded samples, AES systems tend to achieve high agreement levels with trained human raters (Taghipour and Ng, 2016).","there is a sizeable literature in cognitive science, psychology and other social studies offering evidence that biases can create situations that lead us to make decisions that project our experiences and values onto others (Baron, 2007).",contrasting test_836,"Here, the target language expression is typically either a translation or a definition.","these cannot be reliably distinguished in an automated way, so that target language information is best represented as a definition rather than as a translation.",contrasting test_837,"There are many existing Arabic corpora (Atwell, 2019).",we are only interested in those that include Hadith or classical Arabic text in general.,contrasting test_838,"In another study, a survey was conducted to enumerate the freely available Arabic corpora and stated the existence of one Hadith corpus.","it was not accessible, mentioned or used in the literature (Zaghouani, 2017).",contrasting test_839,The number of tokens in the English Hadiths is larger than the Arabic version.,the Arabic Hadiths are richer in vocabulary as it contains more unique words than the English version as shown in Table 4.,contrasting test_840,"Applications that allow users to interact with technology via spoken or written natural language are emerging in all areas, and access to language resources and open-source software libraries enables faster development for new domains and languages.",lT 
is highly language dependent and it takes considerable resources to develop LT for new languages.",contrasting test_841,"Because of the exploratory nature of the project and the type of information that has been collected, the language actor documentation is very detailed and often shaped by the organization and work environment of the specific institution.",some good candidates for facets did emerge from the data when we analyzed it specifically with this aim in mind.,contrasting test_842,"A straightforward approach would be to share the character level vocabulary between CJK languages, as it was possible between Chinese and Japanese.","this, unfortunately, is not a straightforward operation, as Hangul (the Korean writing system) is phonetic, unlike the other two examples.",contrasting test_843,The elaborate deep learning models created new standards in OCR.,"like any machine learning method, deep learning models also need training material.",contrasting test_844,"Like kraken, it allows the user to specify the structure of the neural network with VGSL.","to kraken, Tesseract is not GPU-enabled.",contrasting test_845,It became evident during our work that character error rates (CER) are a good indicator about the models’ ability to identify characters correctly.,"for any further data processing which may include indexing or applying text mining techniques, the bag-of-words F1-measure provides a better picture of the systems' performances.",contrasting test_846,"Also, the Opus dataset is a much widely used parallel corpus resource in various researcher's works.","we observed that in both of these well-known parallel resources there are many repeated sentences, which may result in the wrong results (can be higher or lower) after dividing into train, validation, and test sets, as many of the sentences, occur both in train and test sets.",contrasting test_847,"Regardless of the MT approach applied, a MT system automatically generates an equivalent version (in some
target language) of an input sentence (in some source language).","despite the huge effort of the MT community, it is not possible yet to generate a perfect completely automatic translation for unrestricted domains.",contrasting test_848,Finding definite references and their antecedents in the coreference resolution data is easy.,"as we described in the experiment section, it is difficult to make the correct answer rate be 50%, because most articles can be predicted using language models.",contrasting test_849,"For pro-drop languages like Japanese and Chinese, zero pronoun was known to be one of the most difficult problems and many specific extensions for baseline translation methods have been discussed in previous research (Taira et al., 2012;Kudo et al., 2014;Takeno et al., 2016;Wang et al., 2016;Wang et al., 2018).","it seems that context-aware neural machine translation can handle Japanese zero pronouns just as effectively as overt pronouns in English-to-Russian translation (Voita et al., 2018).",contrasting test_850,They built a large-scale test set from German-English bilingual texts using coreference resolution and word alignment tools.,"to build a large-scale test set for Japanese zero pronouns, we have to develop accurate tools for Japanese empty category detection (zero pronoun identification) and Japanese coreference resolution, which remain open problems.",contrasting test_851,The results of the 6th WAT suggest that most sentences that are typical in TDDC and do not depend on context are translated correctly.,there are mistranslations in sentences that contain words that are not present in TDDC or whose meaning changes depending on the context.,contrasting test_852,"The most comparable resource to the one presented here is the COPPA Corpus version 2 which contains around 13 million sentences.","for other language pairs our corpus is larger, e.g.
for there are 6.6M English/Japanese sentences while the JW300 corpus (Agić and Vulić, 2019) contains around 2.1M.",contrasting test_853,"As shown in Section 2.1, the standard NMT usually models a text by considering isolated sentences based on a strict assumption that the sentences in a text are independent of one another.",disregarding dependencies across sentences will negatively affect translation outputs of a text in terms of discourse properties.,contrasting test_854,"MADAMIRA (Pasha et al., 2014) combines MADA (Morphological Analysis and Disambiguation of Arabic) which is built on SAMA (Standard Arabic Morphological Analyser) and AMIRA (a morphological system for colloquial Egyptian Arabic).","to MADAMIRA, FSAM's rule-based system focuses on MSA templatic morphological analysis yielding root and pattern, generation and diacritization.",contrasting test_855,"On composing both FSTs, a weighted FST mapping surface form to weighted lexical forms will be generated.",if the second FST doesn't have a path for a certain analysis then the surface-form:analysis pair will be dropped.,contrasting test_856,"For example, named entities, cognates/loanwords, and morphologically complex words that contain multiple morphemes are extremely challenging to properly tokenise because the occurrences of such terms are rare even in large training datasets.",substrings of such terms are likely to be more frequent.,contrasting test_857,"Modern Standard Arabic (MSA), the official language of the Arab world, is well studied in NLP and has an abundance of resources including corpora and tools.","most Arabic dialects are considered under-resourced, with the exception of Egyptian Arabic (EGY).",contrasting test_858,"Such models are well equipped to model some aspects of morphology implicitly as part of an end-to-end system without requiring explicit feature engineering.","these models are very data-intensive, and do not scale down well in the case of low-resource languages.",contrasting
test_859,The different analyzers provide minor or no improvements over the Neural Joint Model alone when embedding the candidate tags.,the ranking approach reduces the accuracy drastically for different combinations of analyzers.,contrasting test_860,"Indeed, in some languages (e.g., Basque) 100 words cannot cover even a single verb paradigm.","even in such restricted conditions some systems perform significantly better than others, the state-of-the-art approach is imitation learning via minimization of Levenshtein distance between the network output and the correct word form (Makarov and Clematide, 2018b).",contrasting test_861,"They were also extensively used in Najafi et al. (2018) system, that took the second place in Sigmorphon 2018 Shared Task.","they utilized the complete Unimorph data, which is sufficiently more than 1000 word forms used in our work.",contrasting test_862,"By restricting transformations to orthogonal linear mappings, VecMap and MUSE rely on the assumption that the monolingual embeddings spaces are approximately isomorphic (Barone, 2016).","it has been argued that this assumption is overly restrictive, as the isomorphism assumption is not always satisfied (Søgaard et al., 2018;.",contrasting test_863,"Not further improving the results for German, Czech and Italian languages might be because of the sufficiency of the target embedding usage for domain adaptation (Jurafsky and Martin, 2014) in those.","the fact that Spanish and French performances improved on both Wikipedia and Twitter domains when DomDrift is used, might show that the necessity of DomDrift can be related to certain property of the target language, which can be further explored as future work.",contrasting test_864,"Moreover, Turkish, Russian, Tigrigna, Polish, Uyghur, Croatian, Wolaytta, Bulgarian, German, Swedish are also characterized by high OOV rates.","mandarin, Thai, Hausa, Japanese, Vietnamese and English are characterized by low OOV rate.",contrasting
test_865,"Frame-semantic parsers, however, are normally trained on manually annotated resources such as the FrameNet corpus (Baker et al., 1998) or the OntoNotes corpus (Pradhan and Xue, 2009;Weischedel et al., 2013).",such annotations only exist for a small subset of the world's languages.,contrasting test_866,"For both setups, we normalize labels to be conform with the PropBank (Palmer et al., 2005) notation (e.g., A1 becomes ARG1).","as shown in Figure 1, the experiments with the full label set have a slightly better accuracy than the ones with a simplified label set, so we will present only the results for the former.",contrasting test_867,"Given that CS is language-dependent, a corpus for each language pair is needed.","collecting CS corpora is a very challenging task, thus the collected, and available, corpora are very scarce and cover few language pairs.",contrasting test_868,The above-mentioned differences are statistically significant.,relatively large standard deviations of the metrics should be taken into account.,contrasting test_869,"More specifically, we exploit the fact that Twitter users can make use of Twitter screen names (e.g., @UserScreen-Name) in their tweet posts to mention other users, which provides us with unambiguous mentions.","we observe that many tweets also contain proper names (e.g.
last names or acronyms for organizations) to refer to other Twitter users, thereby creating ambiguities about the user (entity) they refer to.",contrasting test_870,Note that we do not claim that all multimedia analysis work adopts an overly simplistic conceptualization of how text and images relate.,we find the lack of research on realistic connections between text and images is serious enough that it may hold back the state of the art in multimedia analysis for disaster management.,contrasting test_871,"In sum, our analysis reveals that the image caption to some extent describes the content of the image.",in many cases the caption provides additional information which is not conveyed by the image alone.,contrasting test_872,"Since the news articles in our collection are news reports, rather than editorials or feature articles, recency of what is reported in the text and depicted in the accompanying images is to be expected.",there is also a fair proportion where the image is less recent.,contrasting test_873,"Up until this point, we have investigated temporal distance.",we have an important point to make about spatial distance that emerged from our manual analysis.,contrasting test_874,Success at that task could ultimately also form the basis of an automatic annotation in the future.,"here we limit ourselves to a pilot, which provides a basic demonstration that the categories of news articles and images can be automatically distinguished.",contrasting test_875,"Further, articles about ongoing flooding often are associated with flood-related images.",it is not advisable to assume that a flood-related image will directly relate to a flooding-event described in the corresponding article.,contrasting test_876,"As in them, we constructed our questions and answers based on both textual and visual cues from short video clips.","unlike them, our proposed dataset relies on video clips that were recorded naturally by people, without predefined scripts.",contrasting 
test_877,"This insight is in line with our previous work (Schulte im Walde et al., 2016) which also demonstrated that empirical modifier properties do not have a consistent effect on the quality of predicting compound compositionality.","ranges zooming into the prediction results for compounds with high-, mid and low-productivity heads (see Table 6), we do observe patterns for compound subsets.",contrasting test_878,"This is probably one of the reasons why many studies that investigated idiomatic expressions, only collected limited information about idiom properties for very small numbers of idioms only.","this is problematic for research, because it hinders comparability of results.",contrasting test_879,"On the other hand, in concrete applications, crucial domain-specific entities need to be identified in a reliable way, such as designations of legal norms and references to other legal documents (laws, ordinances, regulations, decisions, etc.).","most NER solutions operate in the general or news domain, which makes them inapplicable to the analysis of legal documents (Bourgonje et al., 2017;Rehm et al., 2017).",contrasting test_880,We hypothesize that an increase in training data would yield better results for BiLSTM-CRF but not outperform transfer learning approach of MTL (or even BioBERT).,"to other common NER corpora, like CoNLL 2003 14 , even the best baseline system only achieves relatively low scores.",contrasting test_881,"It connects sentiment analysis and Natural Language Generation (Zhang et al., 2018a) and facilitates a lot of NLP applications such as fighting against offensive language in social media (Santos et al., 2018), news rewriting, and building controllable dialogue systems.",this task is difficult in practice due to the lack of parallel data (sentences with similar content but different sentiments).,contrasting test_882,It suggests that the semantic representation is also essential to preserve content.,the lack of semantic representation 
brings little decrease in sentiment transfer accuracy.,contrasting test_883,Natural Language Processing (NLP) can help unlock the vast troves of unstructured data in clinical text and thus improve healthcare research.,"a big barrier to developments in this field is data access due to patient confidentiality which prohibits the sharing of this data, resulting in small, fragmented and sequestered openly available datasets.",contrasting test_884,"Natural Language Processing (NLP) has enormous potential to advance many aspects of healthcare by facilitating the analysis of unstructured text (Esteva et al., 2019).",a key obstacle to the development of more powerful NLP methods in the clinical domain is a lack of accessible data.,contrasting test_885,We cannot say for certain whether using GPT-2 or EDA could positively impact our results.,it appears that our EDA baseline generally performs even worse than our Transformer and GPT-2 augmentations and the Original data itself for both MimicText-98 and MimicText-9.,contrasting test_886,"It is our hypothesis that these inaccuracies can provide an optimal amount of noise when using a model that has been pretrained on biomedical texts, thus allowing them to better generalise.",this noise proves too much for models that have only been pretrained on non-medical text.,contrasting test_887,This leads us to hypothesise that this task might be too easy and that even weaker models are able to relatively accurately identify the phenotypes of patients from their discharge summaries.,"we still note that our baseline models report the highest values across our metrics, especially our 'Original' data using the BioBERT model which reports the best accuracy and F1 scores.",contrasting test_888,"Recently, contextualized word embeddings such as BERT (Devlin et al., 2019) have largely improved the performance of NLP tasks compared to static embeddings such as word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014).","static embeddings 
are still frequently used in various studies for sentence embeddings (Yang et al., 2019;Almarwani et al., 2019) and even the other domains such as the extraction of interaction between drugs in the biomedical field (Sun et al., 2019).",contrasting test_889,These approaches effectively handle surface-variation of words such as inflected forms and typos.,this is only feasible when the roots of words exist in a vocabulary.,contrasting test_890,The authors show that CamemBERT obtains significant improvements on many French tasks compared to the publicly available multilingual BERT.,"the architecture used in CamemBERT is different to BERT, which makes the comparison among the models less straightforward.",contrasting test_891,Such results show that mean smiling intensity seems to be a more robust criterium than the sole presence or absence of smile to evaluate the impact of smiling on the success or failure of humor.,"based on only four participants including one exception (MA), such a result cannot be more precise.",contrasting test_892,This result could indicate that mean smiling intensity has a larger impact of the success or failure of humor than the simple presence of smiling.,"such a result cannot be considered meaningful either because, this only holds for three participants.",contrasting test_893,"For example, an NMT system might translate the source sentence given in (1) as the viable Spanish translation given in (2).",in a given organizational context (3) or even (4) might instead be the approved translation.,contrasting test_894,"The current implementation only uses a flat list of nonhierarchical source-target term pairs (often referred to as a ""glossary"") to seed the injection process, which increases the likelihood of polysemic collisions.","terminology management practices involve much more complex termbases that express several types of lexical, domain, semantic, hierarchical (taxonomic), ontological, overlapping, and nesting relationships among
terms.",contrasting test_895,"Overall, the percentage of rare words gets smaller as corpus size increases, as more and more words appear over 10 times.",the hyperparameters seem to have different effects on this value depending on corpus size as well.,contrasting test_896,The naming form distribution in the GTTC does not reflect the natural distribution of naming forms because we oversampled Dr.-containing tweets.,the relation between naming and stance should not be affected by oversampling.,contrasting test_897,"Finally, the political orientation of users was determined by a proxy that assumes that if you criticise right-wing politicians you are more likely to be left-leaning and vice versa.","of course, you can also criticise politicians from your own political spectrum.",contrasting test_898,A sample of title-containing tweets suggests that titles are not a definite signal that an otherwise positive-sounding German tweet is meant to be interpreted negatively (or vice versa).,"since the use of honorifics can be indicative of sarcasm (Liu et al., 2014), it is worth investigating whether the use of titles alongside explicit negative stance should be interpreted as sarcasm, and whether this sarcastic use plays a role in causing the weaker positive association with formal naming in left-leaning discourse.",contrasting test_899,"A typical application of classical SA in an industrial setting would be to classify a document like a product review into positive, negative or neutral sentiment polarity.","to SA, the more fine-grained task of Aspect-Based Sentiment Analysis (ABSA) (Hu and Liu, 2004;Pontiki et al., 2015) aims to find both the aspect of an entity like a restaurant, and the sentiment associated with this aspect.",contrasting test_900,"As we mentioned above, majority works have boiled down opinion mining to a problem of classifying whether a piece of text expresses positive or negative evaluation.","evaluation is, in fact, much more complex and multifaceted,
which varies depending on linguistic factors, as well as participants of the communicative activity.",contrasting test_901,"Many polarity shifters can affect both positive and negative polar expressions, shifting them towards the opposing polarity.",other shifters are restricted to a single shifting direction.,contrasting test_902,"The dataset created in (Castro et al., 2018) was used by Castro et al. (2018) in the context of the HAHA 2018 competition for Humor Detection and Funniness Average Prediction","the dataset presented in (Castro et al., 2018) still presents some issues.",contrasting test_903,None of the systems could beat this baseline in terms of precision.,"the recall of this baseline is very low, because many humorous tweets are not written as dialogues, and that is why its F1 score is not that high.",contrasting test_904,"For a further development of this line of work, it is essential to construct a linguistically valid treebank on CG.","current corpora based on CG often do not take advantage of linguistically adequate analyses developed in the CG literature, mainly because these corpora are converted from existing resources which do not contain fine-grained annotation (Honnibal et al., 2010).",contrasting test_905,Information about a predicate's arguments is encoded only indirectly and their immediate accessibility depends on the precise type of PS treebank.,"dependency structures (DS) abstract away from linear order and concentrate on encoding functional dependencies between the items of a clause.",contrasting test_906,One can include functional labels representing grammatical relations in a PS parser.,it has been shown that training a PS parser by including functional labels produces lower constituency parsing accuracy.,contrasting test_907,It includes an Urdu text corpus of 1.6 million words and a parallel English-Urdu corpus containing 200K words.,the Urdu EMILLE corpora are unannotated with respect to grammatical structure.,contrasting test_908,"It
seems that the main takeaway is ""the more, the better,"" as the top-scoring setup uses all five auxiliary treebanks.",we get a significantly stronger improvement from the constituency treebanks than from the dependency treebanks.,contrasting test_909,"For language-specific models and questions, such representations are often adequate and may even be preferable to the alternatives.","in multilingual models, the language-specific nature of phonemic abstractions can be a liability.",contrasting test_910,The usefulness of AlloVera for all purposes will increase as it grows to cover a broad range of the languages for which phonetic and phonological descriptions have been completed.,"to illustrate the usefulness of AlloVera, we will rely primarily on the zero-shot, universal ASR use-case in the evaluation in this paper.",contrasting test_911,"Then phonetic representations could be derived by first applying G2P to the orthographic text, then applying the appropriate transducer to the resulting phonemic representation.","constructing such a resource is expensive, requires several specialized skills on the part of curatorswho must encode the phonological environments in which allophones occur-and requires information that is often omitted from phonological descriptions of languages.",contrasting test_912,"Therefore, adding more languages can confuse the model, leading it to assign incorrect phonemes.",alloVera provides a consistent assignment across languages by using allophone inventories.,contrasting test_913,The universal phone inventory consists of all allophones in AlloVera.,the shared phoneme model could only generate inconsistent universal phonemes and the private phoneme model could only generate languagespecific phonemes.,contrasting test_914,The term allophone was coined by Benjamin Lee Whorf in the 1920s and was popularized by Trager and Block (1941).,"the idea goes back much further, to Baudoin de Courtenay (1894).",contrasting test_915,The row tagged with Full 
means that the whole training set was used to train the multilingual model.,the row with tag Low is trained under a low resource condition in which we only select 10k utterances from each training corpus.,contrasting test_916,"The recognition and automatic annotation of temporal expressions (e.g. Add an event for tomorrow evening at eight to my calendar) is a key module for AI voice assistants, in order to allow them to interact with apps (for example, a calendar app).","in the NLP literature, research on temporal expressions has focused mostly on data from the news, from the clinical domain, and from social media.",contrasting test_917,"Snips (Coucke et al., 2018) is a crowdsourced dataset for the voice assistant domain, specifically for seven intents, which is widely used for benchmarking NLU components of voice assistants.","no explicit details are provided on how the data was created or collected, and it does not appear to come from a real-world interaction with a voice assistant (sentences from Snips can at times be rather odd, albeit grammatical, e.g.",contrasting test_918,An hour is then annotated as a DURATION.,"however, the reservation needs to be done for a specific punctual time that is not expressed here.",contrasting test_919,"Similar to Wang et al.
2019, on the one hand we annotate the reliance on factual knowledge, that is (Geo)political/Legal, Cultural/Historic, Technical/Scientific and Other Domain Specific knowledge about the world that can be expressed as a set of facts.","we denote Intuitive knowledge requirements, which is challenging to express as a set of facts, such as the knowledge that a parenthetic numerical expression next to a person's name in a biography usually denotes his life span.",contrasting test_920,"For the non-BERT models, DocQA utilizes the loss value from the answer span prediction to check answerability, while Read+Verifier introduces a new classifier for verifying the question and answer pair.",bERT-based models first pretrain deep bidirectional representations from large-scale unlabeled text without any explicit modeling for a specific task.,contrasting test_921,"These previous studies have mostly focused on factoid questions, each of which can be answered in a few words or phrases generated by understanding multimodal contents in a short video clip.",this problem definition of video question answering causes some practical limitations for the following reasons.,contrasting test_922,"Beyond these functionalities, which can in principle be achieved with large finite-state dialogue models, commercial assistants were not intended to support extended conversations on topics in the general domain, although their capacities for chitchat have been gradually increasing (Fang et al., 2018).","the systems known as chatbots were intended from the start to offer robust conversational capacities, although with a thin and often implicit knowledge base.",contrasting test_923,"Given the outstanding recent results of the BERT model applied to QA on the SQuAD dataset, we selected this model for our QA component.","the model is designed for answer extraction from a specific paragraph -with provision for the cases when the paragraph does not contain the answer -while we need to develop an end-to-end
solution, starting directly from the document repository, without prior knowledge of the paragraphs relevant to each question.",contrasting test_924,These analyses confirm the possibility to use syntactic n-gram features in cross-lingual experiments to categorize texts according to their CEFR level (Common European Framework of Reference for Languages).,text length and some classical indexes of readability are much more effective in the monolingual and the multilingual experiments than what Vajjala and Rama concluded and are even the best performing features when the cross-lingual task is seen as a regression problem.,contrasting test_925,"Most of the works in this area have been focused on English as a second language where the needs are the most obvious (Condon, 2013;Weigle, 2013).","although knowing a lingua franca is important, being able to integrate oneself into another culture by communicating in its language is also important.",contrasting test_926,"In this experiment, the texts written in German are used to predict separately the CEFR level of the Italian and Czech texts.","as shown in Table 1, there are very large disparities between the three languages regarding the number of texts classified in each CEFR level.",contrasting test_927,The analyses support their conclusion that it is possible to use features like POStag or dependency ngrams to learn a predictive model on one language and then use it to categorize texts written in another language.,the complementary analyses suggest that the number of words in each text and some classical indexes of readability are more effective when the task is seen as a regression problem.,contrasting test_928,"It follows that making the code available in the form of a Docker image as it was required for REPROLANG 2020 is useful in order to ensure that the results can be reproduced, but it does not guarantee that these results correspond to the explanations given in the paper.","providing a Docker image makes it possible 
to find the versions of the programs and modules used, often necessary for reproducing exactly a study.",contrasting test_929,These patterns are helpful to find irony samples in a given corpus.,they cannot be used as irony detection algorithms due to their limited coverage.,contrasting test_930,We initially attempted to set this up as a labeling task by providing participants labels and definitions.,"we found it difficult for annotators to pinpoint differences between versions where edits modify or provide new information, in contrast to providing only stylistic changes.",contrasting test_931,"They are considered improvements over word models, and their effectiveness is usually judged with benchmarks such as semantic similarity datasets.",most of these datasets are not designed for evaluating sense embeddings.,contrasting test_932,"As mentioned in Section 3.2, GenSense-1 and SenseRetro-1, in which only first sense vectors are utilized, outperform their multi-sense counterparts in MEN, RW, WS353, and SCWS.",on MSD-1030 an opposite pattern is shown.,contrasting test_933,"Thus, semantic attribute vectors can effectively capture the commonalities and differences among concepts.","as semantic attributes have been generally created by psychological experimental settings involving human annotators, an automatic method to create or extend such resources is highly demanded in terms of language resource development and maintenance.",contrasting test_934,"Interestingly, the predicted attributes (originated from visual attributes annotated in ViSA) contributed to the performance improvement in VisSim and SemSim, outperforming not only the existing results but also that with fast-Text.",significant differences were not observed between the dir and ext tasks.,contrasting test_935,"So far, brain research has offered insight on the fact that we develop mental images for words learned.",what this fails to tell us is how they would look like visually.,contrasting test_936,"Abstract 
shapes contain descriptions like ""rectangle"" for Turkey or ""pointy"" for Somalia.",associations use other objects that look similar to the shape of the country as a reference.,contrasting test_937,This shows that people's size description generally matches the actual size of the country on the world map.,the location of the country seems to play an important role in the accuracy of people's descriptions.,contrasting test_938,These datasets allow modern machine learning techniques to glean insight from the massive amounts of textual data they contain.,"in the areas of humor classification and generation we find much smaller datasets, due to the complexity of humorous natural language.",contrasting test_939,"A task similar to that of (Hossain et al., 2019;Weller and Seppi, 2019) can be done with this dataset, where a model predicts the level of humor found in the joke in order to examine what characterizes humor.","due to Reddit's large scale and uneven distribution of upvotes, predicting the number of upvotes would be a sparse and difficult task.",contrasting test_940,"Many studies have been proposed in recent years to deal with online abuse, where swear words have an important role, providing a signal to spot abusive content.","as we can expect observing the different facets of swearing in social environments, the presence of swear words could also lead to false positives when they occur in a nonabusive context.",contrasting test_941,Zannettou et al. 
(2018) compared the behavior of troll accounts with a random set of Twitter users to analyze the influence of troll accounts on social media.,"new accounts can be opened at any time, and troll accounts can be suspended or deleted at any time.",contrasting test_942,"As such, we can conclude that these selected stylometric features can be successfully transferred from one language to another.",most of the stylometric features are language-dependent and will also rely on external natural language processing techniques.,contrasting test_943,"In tweet (7), the writer uses a joke to make an ironic statement about the social problem of the reluctance of young men to marry.","in example (8) and despite the use of the hashtag (""#irony""), the tweet is just a simple joke about a girl mosquito.",contrasting test_944,The only previous attempt at normalizing Italian social media data is from Weber and Zhekova (2016).,"they have a different scope of the task, mostly focusing on readability, not on normalization on the lexical level.",contrasting test_945,"However, perhaps surprisingly, when training on canonical data (ISDT), using predicted normalization on the input data leads to a slightly better performance compared to using gold.","the differences are very minor in this setting, and considering the size of the test data (100 tweets), we can not draw any conclusions from these results.",contrasting test_946,"Prior studies have analyzed how location affects the type of language that people use, often looking at text written by authors from different countries when exploring crosscultural differences (Poblete et al., 2011; Garcia-Gavilanes et al., 2013).",it is not always necessary to look at multiple countries in order to view different cultures.,contrasting test_947,"In most cases, the speech is clearly pronounced, well articulated and easy to understand.",oral history interviews are often recorded using conventional recording equipment that was common at the time of 
recording.,contrasting test_948,This setup also achieves slightly better results than the proposed approach on the Challenging Broadcast test set with the larger language model.,the gain is less than 0.2% relative.,contrasting test_949,"After launching the procedure of annotation, Analor creates another TextGrid file for every sound file containing a new tier of automatically segmented periods.",analor creates only one tier with periods but TextGrid files of manual annotation contain a tier for each speaker in every sound file.,contrasting test_950,"Works like Tacotron (Wang et al., 2017),Tacotron 2 (Shen et al., 2017), Deep Voice 3 (Ping et al., 2017) are capable of producing high quality natural speech.",all of these methods are data hungry and require approximately 24 hrs of text-to-speech data for a single speaker.,contrasting test_951,"This corpus was used to train text-to-speech systems for 13 languages were developed in (Pradhan et al., 2015).","in this corpus, the amount of data provided per language is far too less (≈25% of recent TTS datasets) for training recent neural network based systems that can produce natural, accurate speech.",contrasting test_952,"It can also be seen that the alignment curve shape is inferior in the case of Malayalam, and this is also reflected in the lower MOS scores for Malayalam.",hindi and Bengali has near perfect alignment curves which corresponds to the higher MOS scores that we get for these languages.,contrasting test_953,"Using a limited database of Mizo tones, the authors reported that pitch height and F0 slope can automatically classify Mizo tones into the four phonological categories with considerable accuracy of 70.28%.","this work had several shortcomings, firstly, the Mizo database used for the work was considerably small; secondly, the approach for identifying tones was threshold based and no statistical method was incorporated.",contrasting test_954,"Each tone combination consists of five unique phrases which 
were recorded three times by each speaker, which resulted in 17,280 phrases and 54,720 total tokens (19 speakers x 64 tonal combinations x 5 trisyllabic phrases x 3 monosyllables x 3 repetitions).","22,770 tokens are not considered as these are the low tones derived from RTS which is not considered in the present work.",contrasting test_955,"The design of our corpus is based on S-JNAS, so we also used the ATR 503 sentences and JNAS newspaper article sentences as the script for our participants.","unlike S-JNAS, we used the ATR 503 sentences as training data and the newspaper article sentences as test data.",contrasting test_956,"For the S-JNAS corpus, each of the training data speakers read aloud two sets of ATR 503 sentences (about 100 sentences) and one set of the newspaper article sentences (about 100 sentences).","for our corpus, since many of our speakers are very elderly, and some have limited vision or a tendency towards dementia, we limited the number of sentences we asked each participant to read in order to reduce the burden.",contrasting test_957,"Unlike in the other areas, there was insufficient coaching of the Tokushima participants by the recording staff, such as prompting them to read the text more carefully when they made mistakes, or having them re-read the text aloud when they made serious errors.","speech from the Yamagata speakers obtained the best recognition results, and this may have been because the average age of the participants was the lowest, at 73.4 years, which may have helped them to read aloud more fluently.",contrasting test_958,"The result shows that when dealing with American read material, the word error rate (WER) was 3.1% and when dealing with American/Canadian spontaneous speech, WER was 7.94%.","when the system was used to transcribe IE, WER was much higher and was 22.89%.",contrasting test_959,"In general, the scores on the manual pyramids are higher than on automatic pyramids as the average scores in table 5
show.","the high Pearson's correlation between quality scores on manual and automatic pyramids, especially when we use emb 2m, leads us to argue that this could be an issue of coverage.",contrasting test_960,Research on fact checking has benefited from large-scale datasets such as FEVER and SNLI.,such datasets suffer from limited applicability due to the synthetic nature of claims and/or evidence written by annotators that differ from real claims and evidence on the internet.,contrasting test_961,"In more extreme cases, the claim starts with a pronoun.","it may be necessary to know that before determining whether the claim is supported or not, because there may be multiple lines in the original evidence source that could be talked about.",contrasting test_962,"One way to generate refuted claims is to perform automatic claim negation using rulebased 'not' insertion based on syntax and part-of-speech (Bilu et al., 2015).","this would result in a handful of negation words appearing in the refuted claims by design, causing classifiers to exploit this pattern.",contrasting test_963,Another automated approach is to pick a different random claim from the dataset.,"a ""refuted"" claim chosen this way is likely to be topically dissimilar from the evidence file, rendering it not a useful negative example for the classifier.",contrasting test_964,"The crucial differentiating factor, then, might be the existence of antonyms: a claim is more likely to be refuted by e c if it contains even one antonym of a word in e c .",a claim c that is supported by e c should tend to have no antonyms.,contrasting test_965,"Some such data sets which have enabled the advancement of NLI (and fact verification) are SNLI (Bowman et al., 2015) MNLI (Williams et al., 2017), FEVER (Thorne et al., 2018), and FNC (Pomerleau and Rao, 2017).","these datasets are not devoid of biases (subtle statistical patterns in a dataset, which could have been introduced either due to the methodology of data 
collection or due to an inherent social bias).",contrasting test_966,"For example, in the second data point the bias of Clinton towards the label Agree (i.e., the percentage of data points where the entity Clinton cooccurred with the label Agree) is 63.15%.",the model trained on delexicalized data was able to predict the label with a lower bias (Disagree with 36.85%).,contrasting test_967,"In Thai, spaces are used to separate sentences.","they are used for other purposes as well, such as separating phrases, clauses, and listed items.",contrasting test_968,"To address this problem, ideally, the corpus needs to be extended to cover the target domain.","this usually comes with high costs and requires time, so it is often not feasible.",contrasting test_969,"We suppose that the language model used in this study, which is trained mainly on hotel reviews, could be the most beneficial for segmenting user-generated data in the hotel domain.",there is no annotated corpus for the hotel domain publicly available at the current time.,contrasting test_970,"Since for uttering responses with the high concreteness like ""response 6"" it is necessary to deeply consider the content of narratives, the degree of empathy shown by these responses tends to be high.","since it is not necessary to deeply consider the content of narratives for uttering responses with low concreteness like ""response 5,"" the degree of empathy shown by these responses tends to be low.",contrasting test_971,"Since responses of high versatility are uttered at various points in narrative speech, it is considered that these responses occur along with many other types of responses.","since responses of low versatility are uttered in fewer points in narrative speech, it is considered that these responses do not occur along with many other types of responses.",contrasting test_972,Empathy to narratives encourages a speaker to speak more only when the degree of the empathy is appropriate.,"when the degree of the 
empathy is not appropriate for the narrative, such empathy discourages the speaker.",contrasting test_973,"It offers flexibility to choose the type of annotation and labels as well as several other options during the annotation (e.g., sentence marking, break line and white space deletion).","it does not support multiple parallel annotators nor it is deployed in a server, thus, lacking the ability to track the annotator's progress and to flexibly work on different machines.",contrasting test_974,Doccano also allows for the setting of task-specific labels.,"only categorical labels are supported and the customization of these is also limited to annotation tasks with similar label requirements such as NER, sentiment analysis, and translation.",contrasting test_975,AWOCATo includes a customizable guideline page in HTML.,"to cater to users with limited HTML knowledge, the annotation guidelines can be created, for example, in Google Docs 13 , exported as HTML and stored in a predefined folder.",contrasting test_976,This complexity in the code is the artifact of supporting a very large number of features that are needed in certain cases.,"there are cases where all these features might not be necessary, for instance, researcher who are new to sequence-to-sequence (seq2seq) modeling might need more simpler codes to start with.",contrasting test_977,ESTNLTK library is an extendable collection of NLP utilities which use Text objects to communicate with each other.,practice showed that the original structure of Text objects was not easily extendable and we had to rethink how the information is stored and structured.,contrasting test_978,"Evaluation data was initially taken from the Estonian National Corpus (ENC) (Kallas and Koppel, 2018), which is the largest published collection of Estonian texts so far.",we discovered errors in one of its subcorpora.,contrasting test_979,"More generic specifications for component metadata are currently developed in coordination with Teanga 
development, and will be partially based on the Fintan ontology.","we plan to align our specifications also with those of the European Language Grid (ELG), 27 and thus anticipate a longer consolidation process and several cycles of revision until we arrive at stable specifications.",contrasting test_980,"Ideally these data should record interactions between real users and a dialogue system, or, if a dialogue system is not available (which is very common in the initial stages of development), interactions between real users and a Wizard (a human playing the role of the system), in a so called Wizard of Oz (WOz) setting (Dahlbäck et al., 1993).",this approach can be quite expensive and time consuming.,contrasting test_981,"Also, to take into account the fact that SU actions are generated based on a probability distribution, expected precision, expected recall, and expected accuracy are used (Georgila et al., 2006).","these metrics can be problematic because if a SU action is not the same as the user action in the reference corpus, this does not necessarily mean that it is a poor action.",contrasting test_982,Detecting and interpreting the temporal patterns of gaze behaviour cues is natural for humans and also mostly an unconscious process.,these cues are difficult for conversational agents such as robots or avatars to process or generate.,contrasting test_983,"The Wikisource contains the full text of the Twenty-Four Histories under the Creative Commons license, i.e., they could be freely used, re-distributed, and modified.",there are two limitations: the philological provenance and the current format.,contrasting test_984,"In this paper, we look to characterize phonotactics at the language level.","we use methods more typically applied to specific sentences in a language, for example in the service of psycholinguistic experiments.",contrasting test_985,We conclude that /x/ is in the consonant inventory of at least some native English speakers.,counting it on equal 
status with the far more common /k/ when determining complexity seems incorrect.,contrasting test_986,"Like final obstruent devoicing, vowel harmony plays a role in reducing the number of licit syllables.","to final obstruent devoicing, however, vowel harmony acts cross-syllabically.",contrasting test_987,"QDMR abstracts away the context needed to answer the question, allowing in principle to query multiple sources for the same question.","to semantic parsing, QDMR operations are expressed through natural language, facilitating annotation at scale by non-experts.",contrasting test_988,"QDMR is primarily inspired by SQL (Codd, 1970;Chamberlin and Boyce, 1974).","while SQL was designed for relational databases, QDMR also aims to capture the meaning of questions over unstructured sources such as text and images.",contrasting test_989,"We see compression in both types of contexts, which suggests that the cognitive load hypothesis is the more likely account.",these two hypotheses are not mutually exclusive.,contrasting test_990,"In these methods, syntactic-guidance is sourced from a separate exemplar sentence.",this prior work has only utilized limited syntactic information available in the parse tree of the exemplar sentence.,contrasting test_991,"Our task is similar in spirit to Iyyer et al. (2018) and Chen et al. 
(2019a), which also deals with the task of syntactic paraphrase generation.",the approach taken by them is different from ours in at least two aspects.,contrasting test_992,"Recent studies (e.g., Linzen et al., 2016; Marvin and Linzen, 2018; have explored this question by evaluating LMs' preferences between minimal pairs of sentences differing in grammatical acceptability, as in Example 1.","each of these studies uses a different set of metrics, and focuses on a small set of linguistic paradigms, severely limiting any possible bigpicture conclusions.",contrasting test_993,Marvin and Linzen (2018) expand the investigation to negative polarity item and reflexive licensing.,"these and related studies cover a limited set of phenomena, to the exclusion of well-studied phenomena in linguistics such as control and raising, ellipsis, quantification, and countless others.",contrasting test_994,"in the simple LM method, suggesting that the probabilities Transformer-XL assigns to the irrelevant part at the end of the sentence very often overturn the observed preference based on probability up to the critical word.","gPT-2 benefits from reading the whole sentence for BINDINg phenomena, as its performance is better in the simple LM method than in the prefix method.",contrasting test_995,Our benchmarks are exact reproducible in the sense that we provide the tables that record all model results (Section 3.3) and the code to run and evaluate our HPO algorithms (Section 6).,"they are not guaranteed to be broad reproducible, because the generalizability of the results might be restricted due to fixed collections of hyperparameter configurations, the variance associated with multiple runs, and the unknown best representative set of MT data.",contrasting test_996,"The preceding discussion shows that if we have a set of terminals that are anchors for the true nonterminals in the original grammar, then the productions and the (bottom-up) parameters of the associated productions will be 
fixed correctly, but it says nothing about parameters that might be associated to productions that use other nonterminals.",it is easy to show that under these assumptions there can be no other nonterminals.,contrasting test_997,"One strand of research looks at using the IO algorithm to train some heuristically initialized grammar (Baker, 1979;Lari and Young, 1990;Pereira and Schabes, 1992;de Marcken, 1999).","this approach is only guaranteed to converge to a local maximum of the likelihood, and does not work well in practice.",contrasting test_998,"For example, if Bob is a bakeoff organizer, he might want accuracy above 60% in order to determine whether to manually check the submission.","if Bob is providing ''MT as a service'' with strong privacy guarantees, he may need to provide the client with accuracy higher than 90%.",contrasting test_999,"Equipped with external knowledge and multi-task learning, our model can further reduce chaotic logic and meanwhile avoid repetition.",the analysis result illustrates that generating a coherent and reasonable story is challenging.,contrasting test_1000,"But the model does not succeed in the style transfer task, and simply learns to add the word doctors into layman sentences while almost keeping the other words unchanged; and adding the word eg into the expertise sentences.","it achieves good performance on all of the three ST measures, but makes little useful modifications.",reasoning test_1001,"Moreover, these two structure encoders are bidirectionally calculated, allowing them to capture label correlation information in both top-down and bottom-up manners.",hiAGM is more robust than previous top-down models and is able to alleviate the problems caused by exposure bias and imbalanced data.,reasoning test_1002,Note that DAG can be converted into a tree-like structure by distinguishing each label node as a single-path node.,the taxonomic hierarchy can be simplified as a tree-like structure.,reasoning test_1003,The key 
information pertaining to text classification could be extracted from the beginning statements.,we set the maximum length of token inputs as 256.,reasoning test_1004,"Regardless of their source, emails are usually unstructured and difficult to process even for human readers (Sobotta, 2016).",many approaches have been proposed for cleansing newsgroup and email data.,reasoning test_1005,"Importantly, dependency paths between content words do not generally contain function words.","by comparing paths across languages, differences in the surface realization are often masked, and argument structure and linkage differences emphasized.",reasoning test_1006,"Japanese and Korean are largely similar from the point of view of language typology (SOV word order, topic prominence, agglutinative morphology), but there are also important differences on the level of usage.","the adjective class in Korean is less productive, and translations often resort to relative clauses for the purposes of nominal modification.",reasoning test_1007,"The ""most common other path"" for both Russian and French is xcomp+nsubj, which is easy to explain: PUD corpora of these languages ""demote"" fewer auxiliary predicates than English (criteria for demotion are formulated in terms of superficial syntax and differ between languages) and more often place the dependent predicates as the root.",in constructions like he could do something the direct edge between the subject and the verb of the dependent clause is replaced with two edges going through the modal predicate.,reasoning test_1008,This solves the issue of complex categorical modeling but makes slot-filling dependent on an intent detector.,we propose a framework that treats slot-filling as a fully intentagnostic span extraction problem.,reasoning test_1009,"Second, the train-test splits of NICHE dataset contain same CNs since the splitting has been done using one paraphrase for each HS and its all original CNs, while CROWD train-test splits have a 
similar property since an exact same CN can be found for many different HSs.","the non-pretrained transformer models, which are more prone to generating an exact sequence of text from the training set, show a relatively better performance with the standard metrics in comparison to the advanced pre-trained models.",reasoning test_1010,"In fact, we observed that, after the output CN, the over-generated chunk of text consists of semantically coherent brand-new HS-CN pairs, marked with proper HS/CN start and end tokens consistent with the training data representation.","on top of CN generation for a given HS, we can also take advantage of the over-generation capabilities of GPT-2, so that the author module can continuously output plausible HS-CN pairs without the need to provide the HS to generate the CN response.",reasoning test_1011,"Although it seems more intuitive to focus on precision since we search for an effective filtering over many possible solutions, we observed that a model with a very high precision tends to overfit on generic responses, such as ""Evidence please?"".",we aim to keep the balance between the precision and recall and we opted for F1 score for model selection.,reasoning test_1012,The main goal of our effort is to reduce the time needed by experts to produce training data for automatic CN generation.,the primary evaluation measure is the average time needed to obtain a proper pair.,reasoning test_1013,"We performed some manual analysis of the selected CNs and we observed that especially for the Reviewer≥2 case (which was the most problematic in terms of RR and novelty) there was a significantly higher ratio of “generic” responses, such as “This is not true.” or “How can you say this about an entire faith?”, for which reviewers agreement is easier to attain.",the higher agreement on the generic CNs reveals itself as a negative impact in the diversity and novelty metrics.,reasoning test_1014,"In this case, generalizability of evaluation results 
becomes questionable.","our evaluation methodology needs to fulfill the following two requirements: (1) evaluation must not be performed on translational equivalents of the Source entries to which the model already had access during training (e.g., Sonnenschein and nuklear in our example from Figure 1); but, on the other hand, (2) a reasonable number of instances must be available for evaluation (ideally, as many as possible to increase reliability).",reasoning test_1015,"zh1 was created and is distributed using traditional Chinese characters, whereas the embedding model by Grave et al. (2018) employs simplified ones. ",we converted zh1 into simplified characters using GOOGLE TRANSLATE 6 prior to evaluation.,reasoning test_1016,"Note that some variants also produce different top MT outputs (o), as they were trained using different architectures or decoding algorithms.","we have four sets of DA annotations collected for 400 segments for system variants with different MT outputs: standard Transformer, Transformer with diverse beam search, MoE and ensembling.",reasoning test_1017,"Each domain has a set of slots; each slot can be assigned a value of the right type, a special DONTCARE marker indicating that the user has no preference, or a special “?” marker indicating the user is requesting information about that slot. 
","we can summarize the content discussed up to any point of a conversation with a concrete state, consisting of an abstract state, and all the slot-value pairs mentioned up to that point.",reasoning test_1018,"For training time, ATS is roughly half of the multi-task methods on both Zh2En and En2Zh tasks.","compared with the multi-task methods, ATS can significantly reduce the model size and improve the training efficiency.",reasoning test_1019,"Furthermore, JNC does not provide full-text articles but only lead three sentences.","we take the latter strategy, removing non-entailment pairs from the supervision data for headline generation.",reasoning test_1020,"Recently, pretrained language models such as BERT (Devlin et al., 2019) show remarkable advances in the task of recognizing textual entailment (RTE) 8 .",we fine-tune pretrained models on the supervision data for entailment relation between source documents and their headlines.,reasoning test_1021,"However, no large-scale Japanese corpus for semantic inference (counterpart to MultiNLI) is available.","we created supervision data for entailment relation between lead three sentences and headlines (lead3headline, hereafter) on JNC.",reasoning test_1022,"Furthermore, we would like to confirm whether the filtering strategy can improve the truthfulness of the model.","we also report the support score, the ratio of entailment relation between source documents and generated headlines measured by the entailment classifiers (explained in Section 4.1), and human evaluation about the truthfulness.",reasoning test_1023,Those methods construct the representations of the context and response with a single vector space.,the models tend to select the response with the same words .,reasoning test_1024,"We treat these as positive instances, making the tacit assumption that in the data the agent's reply is always relevant given a user utterance.",the data lacks negative examples of irrelevant agent responses.,reasoning 
test_1025,"The original data is formatted as (dialogue, question, answer), which is not directly suitable for our goal since chatbots only concern about how to respond contexts instead of answering an additional question.",we ask human annotators to rewrite the question and answer candidates as response candidates.,reasoning
test_1026,"Furthermore, the divide-and-conquer strategy greatly reduces the search space but introduces the projectivity restriction, which we remedy with a transition-based reordering system.",the proposed linearizer outperforms the previous state-of-the-art model both in quality and speed.,reasoning
test_1027,"Without enough training examples, the classifier can hardly tell which relation the entity participates in.",the extracted triples are usually incomplete and inaccurate.,reasoning
test_1028,"For example, the relation ""Work in"" does not hold between the detected subject ""Jackie R. Brown"" and the candidate object ""Washington"".","the object tagger for relation ""Work in"" will not identify the span of ""Washington"", i.e., the output of both start and end position are all zeros as shown in Figure 2.",reasoning
test_1029,TRADE-OFF relations express a problem space in terms of mutual exclusivity constraints between competing demands.,"tradeoffs play a prominent role in evolutionary thinking (Agrawal et al., 2010) and are the principal relation under investigation in a significant portion of biology research papers (Garland, 2014).",reasoning
test_1030,"Negative samples are important because possible trigger words can be contiguous, e.g., the phrase 'negative correlation' denotes a TRADE-OFF relation, whereas 'correlation' by itself does not.","the annotation of training examples is harder, and lexical and syntactic patterns that correctly signify the relation are sparse (Peng et al., 2017).",reasoning
test_1031,"Previous methods primarily encode two arguments separately or extract the specific interaction patterns for the task, which have not fully exploited the annotated relation signal.",we propose a novel TransS-driven joint learning architecture to address the issues.,reasoning
test_1032,"Different from TransE, we could not directly utilize TransS to recognize discourse relations, for that each argument could not be reused in discourse.",we exploit TransS to mine the latent geometric structure information and further guide the semantic feature learning.,reasoning
test_1033,"However, the results imply that with the more encoder layers considered, the model could incur the over-fitting problem due to adding more parameters.",we adopt three encoder layers to encode the arguments as our Baseline in section 3.3.,reasoning
test_1034,"However, comparable studies have yet to be performed for neural machine translation (NMT).",it is still unclear whether all translation directions are equally easy (or hard) to model for NMT.,reasoning
test_1035,"In summary, BLEU only allows us to compare models for a fixed target language and tokenization scheme, i.e. 
it only allows us to draw conclusions about the difficulty of translating different source languages into a specific target one (with downstream performance as a proxy for difficulty).",bLEU scores cannot provide an answer to which translation direction is easier between any two source-target pairs.,reasoning
test_1036,This small-scale manual analysis hints that DA scores are a valid proxy for CLDA.,we decided to treat them as reliable scores for our setup and evaluate our proposed metrics by comparing their correlation with DA scores.,reasoning
test_1037,"showed that SANs in machine translation could learn word order mainly due to the PE, indicating that modeling cross-lingual information at position representation level may be informative.",we propose a novel cross-lingual PE method to improve SANs.,reasoning
test_1038,"Our proposed model is motivated by the observation that although every sentence in the training data has a domain label, a word in the sentence does not necessarily only belong to that single domain.","we assume that every word in the vocabulary has a domain proportion, which indicates its domain preference.",reasoning
test_1039,"We remark that the Transformer model, though does not have any explicit recurrent structure, handles the sequence through adding additional positional embedding for each word (in conjunction with sequential masking).","if a word appears in different positions of a sentence, its corresponding embedding is different.",reasoning
test_1040,Recall that the Transformer model contains multiple multi-head attention modules/layers.,our proposed model inherits the same architecture and applies the word-level domain mixing to all these attention layers.,reasoning
test_1041,"This is because the domain proportions are determined by the word embedding, and the word embedding at top layers is essentially learnt from the representations of all words at bottom layers.","when the embedding of a word at some attention layer is already learned well through previous layers (in the sense that it contains sufficient contextual information and domain knowledge), we no longer need to borrow knowledge from other domains to learn the embedding of the word at the current layer.",reasoning
test_1042,"Training without domain labels shows a slight improvement over baseline, but is still significantly worse than our proposed method for most of the tasks.",we can conclude that our proposed domain mixing approach indeed improves performance.,reasoning
test_1043,"For example, in the law domain, we find that ""article"" often appears at the beginning of a sentence, while in the media domain, the word ""article"" may appear in other positions.",varying domain proportions for different positions can help with word disambiguation.,reasoning
test_1044,"We need to explicitly ""teach"" the model where to copy and where to generate.","to provide the model accurate guidance of the behavior of the switch, we match the target text with input table values to get the positions of where to copy.",reasoning
test_1045,"In real-world problems, retrieval response sets usually have many more than 10 candidates.",we further test the selection and binary models on a bigger reconstructed test set.,reasoning
test_1046,"With NOTA options in the training data, the models learn to sometimes predict NOTA as the best response, resulting in more false-positive isNOTA predictions at inference time.","also, by replacing various ground truths and strong distractors with NOTA, the model has fewer samples to help it learn to distinguish between different ground truths and strong distractors; it performs less well on borderline predictions (scores close to the threshold).",reasoning
test_1047,ConceptFlow learns to model the conversation development along more meaningful relations in the commonsense knowledge graph.,"the model is able to ""grow"" the grounded concepts by hopping from the conversation utterances, along the commonsense relations, 
to distant but meaningful concepts; this guides the model to generate more informative and on-topic responses.",reasoning
test_1048,Softmax: We will discuss in §2.3 that annotators are expected to miss a few good responses since good and bad answers are often very similar (may only differ by a single preposition or pronoun).,"we explore a ranking objective that calculates errors based on the margin with which incorrect responses are ranked above correct ones (Collins and Koo, 2005).",reasoning
test_1049,Each question is assigned 5 annotators.,there can be at most 5 unique annotated responses for each question.,reasoning
test_1050,"Also, due to the lack of human-generated references in SQuAD-dev-test, we cannot use other typical generation based automatic metrics.",we use Amazon Mechanical Turk to do human evaluation.,reasoning
test_1051,"While outputting answer-phrase to all questions is trivially correct, this style of response generation seems robotic and unnatural in a prolonged conversation.",we also ask the annotators to judge if the response is a complete sentence (e.g. “it is in Indiana”) and not a sentence fragment (e.g. “Indiana”).,reasoning
test_1052,"This strong dependence on labeled data largely prevents neural network models from being applied to new settings or real-world situations due to the need of large amount of time, money, and expertise to obtain enough labeled data.","semi-supervised learning has received much attention to utilize both labeled and unlabeled data for different learning tasks, as unlabeled data is always much easier and cheaper to collect (Chawla and Karakoulas, 2011).",reasoning
test_1053,"Despite the huge success of those models, most prior work utilized labeled and unlabeled data separately in a way that no supervision can transit from labeled to unlabeled data or from unlabeled to labeled data.","most semi-supervised models can easily still overfit on the very limited labeled data, despite unlabeled data is abundant.",reasoning
test_1054,"By model latency analysis, we find that layer normalization (Ba et al., 2016) and gelu activation (Hendrycks and Gimpel, 2016) accounted for a considerable proportion of total latency.",we propose to replace them with new operations in our MobileBERT.,reasoning
test_1055,"Progressive Knowledge Transfer One may also concern that if MobileBERT cannot perfectly mimic the IB-BERT teacher, the errors from the lower layers may affect the knowledge transfer in the higher layers.",we propose to progressively train each layer in the knowledge transfer.,reasoning
test_1056,"Furthermore, none of the curves exhibit any signs of convergence even after drawing orders of magnitude more samples (Figure 3); the estimated model perplexities continue to improve.",the performance of these models is likely better than the originally reported estimates.,reasoning
test_1057,"While this work helps clarify and validate existing results, we also observe that none of the estimates appear to converge even after drawing large numbers of samples.","we encourage future research into obtaining tighter bounds on latent LM perplexity, possibly by using more 
powerful proposal distributions that consider entire documents as context, or by considering methods such as annealed importance sampling.",reasoning
test_1058,"Yet, in a semi-supervised learning setting where we already have GT labels, we need novel QA pairs that are different from GT QA pairs for the additional QA pairs to be truly effective.","we propose a novel metric, Reverse QAE (R-QAE), which is low if the generated QA pairs are novel and diverse.",reasoning
test_1059,"However, QAE only measures how well the distribution of synthetic QA pairs matches the distribution of GT QA pairs, and does not consider the diversity of QA pairs.","we propose Reverse QA-based Evaluation (R-QAE), which is the accuracy of the QA model trained on the human-annotated QA pairs, evaluated on the generated QA pairs.",reasoning
test_1060,We tune each layer for n epochs and restore model to the best configuration based on validation loss on a held-out set.,the model retains best possible performance from any iteration.,reasoning
test_1061,"However, a key limitation of prior work is that authorship obfuscation methods do not consider the adversarial threat model where the adversary is ""obfuscation aware"" (Karadzhov et al., 2017;Mahmood et al., 2019).","in addition to evading attribution and preserving semantics, it is important that authorship obfuscation methods are ""stealthy"" -i.e., they need to hide the fact that text was obfuscated from the adversary.",reasoning
test_1062,"The quality and smoothness of automated text transformations using the state-of-the-art obfuscators differ from that of human written text (Mahmood et al., 2019).",the intuition behind our obfuscation detectors is to exploit the differences in text smoothness between human written and obfuscated texts.,reasoning
test_1063,The language model has a critical role.,we use neural language models with deep architectures and trained on large amounts of data which are better at identifying both long-term and short-term context.,reasoning
test_1064,The evaded documents are those where the modification strategy somehow crossed an implicit threshold for evading authorship attribution.,we surmise that the evaded documents are likely to be relatively less smooth.,reasoning
test_1065,We have no real world scenario to mimic in that we have not encountered any real world use of automated obfuscators and their outputs.,we make the datasets under a reasonable assumption that original documents are in the vast majority.,reasoning
test_1066,"However, without the audio recordings, proficiency scoring must be performed based on the text alone.",robust methods for text-only speech scoring need to be developed to ensure the reliability and validity of educational applications in scenarios such as smart speakers.,reasoning
test_1067,"Further research is needed to improve machine assessment at the upper and lower ends of the scoring scale, although these are the scores for which the least training data exists.","future work could include different sampling methods, generation of synthetic data, or training objectives which reward models which are less conservatively drawn to the middle of the scoring scale.",reasoning
test_1068,"In our case, we would expect that when users look for academic papers, the papers they view in a single browsing session tend to be related.","accurate paper embeddings should, all else being equal, be relatively more similar for papers that are frequently viewed in the same session than for other papers.",reasoning
test_1069,"We test different embeddings on the recommendation task by including cosine embedding distance as a feature within an existing recommendation system that includes several other informative features (title/author similarity, reference and citation overlap, etc.).",the recommendation experiments measure whether the embeddings can boost the performance of a strong baseline system on an end task.,reasoning
test_1070,"Moreover, current 
methods for KG construction often rely on the rich structure of Wikipedia, such as links and infoboxes, which are not available for every domain.","we ask if it is possible to make predictions about, for example, new drug applications from raw text without the intermediate step of KG construction.",reasoning
test_1071,"While our goal is to require almost no human domain expertise to learn a good model, the size of validation data is much smaller than the size of the training data.",this effort-if helpful-may be feasible,reasoning
test_1072,"However, this fine-tuning for multimodal language is neither trivial nor yet studied; simply because both BERT and XLNet only expect linguistic input.","in applying BERT and XLNet to multimodal language, one must either (a) forfeit the nonverbal information and fine-tune for language, or (b) simply extract word representations and proceed to use a state-of-the-art model for multimodal studies.",reasoning
test_1073,"In essence, it randomly samples multiple factorization orders and trains the model on each of those orders.",it can model input by taking all possible permutations into consideration (in expectation).,reasoning
test_1074,"As the first element Z M CLS represents the [CLS] token, it has the information necessary to make a class label prediction.","Z M CLS goes through an affine transformation to produce a single real-value which can be used to predict a class label.",reasoning
test_1075,"Similarly for XLNET category, the results for MulT (with XLNet embeddings), XLNet and MAG-XLNet are as follows: [84.1, 83.7] for MulT, [85.4, 85.2] for XLNet and [85.6, 85.7] for MAG-XLNet.",superior performance of MAG-BERT and MAG-XLNet also generalizes to CMU-MOSEI dataset.,reasoning
test_1076,"One exception, the long-running TV show Whose Line Is It Anyway, has, despite a large number of episodes, surprisingly little continuous improvised dialogue, due to the rapid-fire nature of the program.",we set our objective as collecting yes-and-type dialogue pairs (yes-ands) to enable their modeling by corpus-driven dialogue systems.,reasoning
test_1077,"An adequate evaluation of our models requires assessing the main yes-and criteria: agreement with the context and the quality of the new relevant contribution, both of which are not feasible with the aforementioned metrics.",we ask human evaluators to compare the quality of the yes-ands generated by various models and the actual response to the prompt in SPOLIN that is used as the input.,reasoning
test_1078,"This is due to the aforementioned fact that they often take short-cuts to directly reach the goal, with a significantly short trajectory.",the success rate weighted by inverse path length is high.,reasoning
test_1079,"Plus, a model that performs well on only one condition but poorly on others is not practically useful.","to measure the robustness among conditions, we calculate the variance of accuracy under all conditions in a task.",reasoning
test_1080,"However, these methods require significant computational resources (memory, time) during pretraining, and during downstream task training and inference.",an important research problem is to understand when these contextual embeddings add significant value vs. when it is possible to use more efficient representations without significant degradation in performance.,reasoning
test_1081,"In particular, we assume that the prior covariance function for the GP is determined by the pretrained embeddings, and show that as the number of observed samples from this GP grows, the posterior distribution gives diminishing weight to the prior covariance function, and eventually depends solely on the observed samples.","if we were to calculate the posterior distribution using an inaccurate prior covariance function determined by random embeddings, this posterior would approach the true posterior as the number of observed samples grew.",reasoning
test_1082,This encoder is also the most lightweight.,we use it for the majority of our experiments.,reasoning
test_1083,The resulting quantization function Q has no gradient towards the input query vectors.,"we use the straight-through estimator (Bengio et al., 2013) to compute a pseudo gradient.",reasoning
test_1084,"We simply compute the embedding vector for the j th dimension of the i th entity as: The final entity embedding vector e i is achieved by the concatenation of the embedding vectors for each dimension: Non-linear Reconstruction (NL): While the codebook lookup approach is simple and efficient, due to its linear nature, the capacity of the generated KG embedding may be limited.",we also employ neural network based non-linear approaches for embedding reconstruction.,reasoning
test_1085,A major limitation of deep learning is the need for huge amounts of training data.,"when dealing with low resource datasets, transfer learning is a common solution.",reasoning
test_1086,Notice that z is a sequence of embedding vectors.,"the output of the FCN is also a sequence of vectors, where each of them tries to estimate the embedding of the corresponding word in the input sentence.",reasoning
test_1087,"Due to the incorporation of bidirectional attention, masked language model can capture the 
contextual information on both sides.",it usually achieves better performances when finetuned in downstream NLU tasks than the conventional autoregressive models.,reasoning
test_1088,"Practically, this theorem suggests the failure of bootstrapping (Efron, 1982) for statistical hypothesis testing and constructing confidence intervals (CIs) of the expected maximum, since the bootstrap requires a good approximation of the CDF (Canty et al., 2006).","relying on the bootstrap method for constructing confidence intervals of the expected maximum, as in Lucic et al. (2018), may lead to poor coverage of the true parameter.",reasoning
test_1089,"We find that across all runs, the LFR is 100% and the clean accuracy 92.3%, with a standard deviation below 0.01%.",we conclude that the position of the trigger keyword has minimal effect on the success of the attack.,reasoning
test_1090,"We present Enhanced WSD Integrating Synset Embeddings and Relations (EWISER), a neural supervised architecture that is able to tap into this wealth of knowledge by embedding information from the LKB graph within the neural architecture, and to exploit pretrained synset embeddings, enabling the network to predict synsets that are not in the training set.","we set a new state of the art on almost all the evaluation settings considered, also breaking through, for the first time, the 80% ceiling on the concatenation of all the standard all-words English WSD evaluation benchmarks.",reasoning
test_1091,"Since the general-language corpus is web-crawled, it obviously contains a certain amount of domain-specific texts as well; especially if a highly technical term is not ambiguous, the general-language corpus contains only such contexts.",the general-language and domain-specific contexts are maximally similar in these cases.,reasoning
test_1092,"the ranker takes a (question, answer) pair and a review as its input and calculates a ranking score s. 
",it can rank all reviews for a given QA pair.,reasoning
test_1093,"Product aspects usually play a major role in all of product questions, answers and reviews, since they are the discussion focus of such text content.",such aspects can act as connections in modeling input pairs of qa and r via the partially shared structure.,reasoning
test_1094,"the ranker is trained based on the rewards from the generation, which is used for instance augmentation in S.","the training set S is updated during the iterative learning, starting from a pure (question, answer) set.",reasoning
test_1095,"World Englishes exhibit variation at multiple levels of linguistic analysis (Kachru et al., 2009).","putting these models directly into production without addressing this inherent bias puts them at risk of committing linguistic discrimination by performing poorly for many speech communities (e.g., AAVE and L2 speakers).",reasoning
test_1096,"One possible explanation for the SQuAD 2.0 models' increased fragility is the difference in the tasks they were trained for: SQuAD 1.1 models expect all questions to be answerable and only need to contend with finding the right span, while SQuAD 2.0 models have the added burden of predicting whether a question is answerable.","in SQuAD 1.1 models, the feature space corresponding to a possible answer ends where the space corresponding to another possible answer begins, and there is room to accommodate slight variations in the input (i.e., larger individual spaces).",reasoning
test_1097,The diminished effectiveness of the transferred adversaries at inducing model failure is likely due to each model learning slightly different segmentations of the answer space.,"different small, local perturbations have different effects on each model.",reasoning
test_1098,NNS and VBG also happen to be uncommon in the original distribution.,we conjecture that the models failed (Section 4) because MORPHEUS is able to find the contexts in the training data where these inflections are uncommon.,reasoning
test_1099,"Although we agree that adding a GEC model before the actual NLU/translation model would likely help, this would not only require an extra model-often another Transformer (Bryant et al., 2019)-and its training data to be maintained, but would also double the resource usage of the combined system at inference time.",institutions with limited resources may choose to sacrifice the experience of minority users rather than incur the extra maintenance costs.,reasoning
test_1100,"For example, in Fig. 1, the sentiment word ""good"" is highlighted, but other useful clues such as ""but"" and ""not"" do not gain sufficient attentions, which may not be optimal for learning accurate text representations.","a dynamically learnable degree of ""hard"" or ""soft"" for pooling may benefit text representation learning.",reasoning
test_1101,"In contrast, if p is smaller, the attentions are more distributed, which indicates the attentive pooling is ""softer"".","in this manner, our APLN model can automatically explore how ""hard/soft"" the attention should be when constructing text representations, which may help recognize important contexts and avoid the problem of over-emphasizing some features and not fully respecting other useful ones, both of which are important for learning accurate text representations.",reasoning
test_1102,"Unfortunately, in most cases the training of APLN is unstable if we directly use it for pooling.",we propose two methods to ensure the numerical stability of the model training.,reasoning
test_1103,"This may be because when p > 1, our model has the risk of gradient explosion.",the scale of input features should be limited.,reasoning
test_1104,"This is probably because a large value of p will lead to sharp attentions on critical contexts, and other useful information is not fully exploited.",the performance is also not optimal.,reasoning
test_1105,This is probably because the rating of a review is usually a synthesis of all opinions conveyed by it.,it may not be optimal for learning accurate text representations if only salient contexts are considered.,reasoning
test_1106,"For a proper evaluation of different auxiliary datasets, hyperparameter search and training runs with multiple random seeds have to be performed for each auxiliary dataset individually.",the process takes even longer and uses even more computational resources.,reasoning
test_1107,"Because the process of selecting the closest vector representation from the main dataset to the auxiliary dataset or vice versa can result in different combinations, the counts in the contingency table will be different depending on the direction.","for a symmetric similarity measure like NMI, two scores are obtained.",reasoning
test_1108,We speculate that long sentences often contain more ambiguous words.,"compared with short sentences, long sentences may require visual information to be better exploited as supplementary information, which can be achieved by the multi-modal semantic interaction of our model.",reasoning
test_1109,Previous works impose a too strong constraint on the matching and lead to many counterintuitive translation pairings.,we propose a relaxed matching procedure to find a more precise matching between two languages.,reasoning
test_1110,This 1 to 1 constraint brings out many redundant matchings.,"in order to avoid this problem, we relax the constraint and control the relaxation degree by adding two KL divergence regularization terms to the original loss function.",reasoning
test_1111,"(2) As q grows larger, the average number of decoding steps (""Step"") increases steadily because the model is misled that to generate then delete a repetitive segment is expected.",q should not be too large.,reasoning
test_1112,(4) The model achieves the best performance with q = 0.5.,we set q = 0.5 in our experiments.,reasoning
test_1113,"Although accelerating the decoding process significantly, NAT suffers from the 
multimodality problem (Gu et al., 2018) which generally manifests as repetitive or missing tokens in translation.",intensive efforts have been devoted to alleviate the multi-modality problem in NAT.,reasoning
test_1114,"For instance, in North America, ""much less than 1% of SMS messages were spam"" (Almeida et al., 2013).",the active learning model should be more sensitive to spam samples.,reasoning
test_1115,"If automatic ICD coding models ignore such a characteristic, they are prone to giving inconsistent predictions.",a challenging problem is how to model the code hierarchy and use it to capture the mutual exclusion of codes.,reasoning
test_1116,"Meanwhile, the graph has been proved effective in modeling data correlation and the graph convolutional network (GCN) enables to efficiently learn node representation (Kipf and Welling, 2016).",we devise a code co-occurrence graph (co-graph) for capturing Code Co-occurrence and exploit the GCN to learn the code representation in the co-graph.,reasoning
test_1117,Effectively learning the document information about multiple labels is crucial for MLC.,"we propose to connect CNN and RNN in parallel to capture both local and global contextual information, which would be complementary to each other.",reasoning
test_1118,"Compressing capsules into a smaller amount can not only relieve the computational complexity, but also merge similar capsules and remove outliers.",hyperbolic compression layer is introduced.,reasoning
test_1119,"Since most of the labels are unrelated to a document, calculating the label-aware hyperbolic capsules for all the unrelated labels is redundant.",encoding based adaptive routing layer is used to efficiently decide the candidate labels for the document.,reasoning
test_1120,"In addition, NLP-CAP applies the non-linear squashing function for capsules in the Euclidean space, while HDR is designed for hyperbolic capsules, which take advantage of the representation capacity of the hyperbolic space.",hYPERCAPS outperforms NLP-CAP as expected.,reasoning
test_1121,"However, the interpretability is very important in the CDS to explain how the diagnosis is generated by machines.",we propose the Bayesian network ensembles on top of the output of ECNN to explicitly infer disease with PGMs.,reasoning
test_1122,"The top 100,000 frequent segmented words consist of the word vocabulary in the embedding layer of ECNN.","the size of the embedding layer is (100000, 100).",reasoning
test_1123,"Since the feature representation of pairs in the same row or column tends to be closer, we believe that pairs in the same row and column with the current pair have a greater impact on the current pair.","we propose the cross-road 2D transformer, in which the multi-head 2D self-attention mechanism is replaced by the cross-road 2D self-attention, and the other parts remain the same.",reasoning
test_1124,The data in different domains usually shares certain background knowledge that can possibly be transferred from the source domain to the target domain.,we leverage external knowledge as a bridge between the source and target domains.,reasoning
test_1125,"Based on our empirical observation, capturing the multi-hop semantic correlation is one of the most important parts for the overall performance of SEKT.",we also investigate the impact of the number of hops used in GCN.,reasoning
test_1126,"During training, we greedily find the 1-best head for each word without tree constraints.",the processing speed is faster than the evaluation phase.,reasoning
test_1127,"On the one hand, despite bringing performance improvements over existing MNER methods, our UMT approach still fails to perform well on social media posts with unmatched text and images, as analyzed in Section 3.5.",our next step is to enhance UMT so as to dynamically filter out the potential noise from images.,reasoning
test_1128,"Second, due to the global structure, the test documents are mandatory in training.","they are 
inherently transductive and have difficulty with inductive learning, in which one can easily obtain word embeddings for new documents with new structures and words using the trained model.",reasoning
test_1129,"Despite that recent language encoders achieve promising performance, it is unclear if they perform equally well on text data with grammatical errors.",we synthesize grammatical errors on clean corpora to test the robustness of language encoders.,reasoning
test_1130,We believe such absolute measurements to the significance of words may be playing a more crucial role (than attention weights) when understanding the attention mechanism.,"unlike many previous research efforts, we will instead focus on the understanding of attention scores in this work.",reasoning
test_1131,"To determine whether a prefix x [1:i] is promising, we can estimate where is the minimum ratio of all sentences with prefix x is greater than a pre-defined threshold, all sentences with prefix x [1:i] should be rejected.",we do not need to waste time to continue sampling.,reasoning
test_1132,"However, a richer latent space does not guarantee a better probability estimation result.","in this part, we delve deeper into whether the decoder signal matching mechanism helps improve probability estimation.",reasoning
test_1133,"As it is shown in Table 5, our method is less likely to perturb some easily-modified semantics (e.g. numbers are edited to other ""forms"", but not different numbers), while search tends to generate semantically different tokens to achieve degradation.",our agent can lead to more insightful and plausible analyses for neural machine translation than search by gradient.,reasoning
test_1134,"However, there are still some problems with machine translation in the document-level context (Läubli et al., 2018).","more recent work (Jean et al., 2017;Wang et al., 2017;Tiedemann and Scherrer, 2017;Maruf and Haffari, 2018;Bawden et al., 2018;Voita et al., 2019a;Junczys-Dowmunt, 2019) is focusing on the document-level machine translation.",reasoning
test_1135,The flat structure adopts a unified encoder that does not distinguish the context sentences and the source sentences.,we introduce the segment embedding to identify these two types of inputs.,reasoning
test_1136,"Intuitively, the less the direction of accumulated gradients is moved by the gradients of a new minibatch, the more certainty there is about the gradient direction.","we propose that the magnitude of the angle fluctuation relates to the certainty of the model parameter optimization direction, and may therefore serve as a measure of optimization difficulty.",reasoning
test_1137,"But after the direction of gradients has stabilized, accumulating more mini-batches seems useless as the gradient direction starts to fluctuate.","we suggest to compute dynamic and efficient batch sizes by accumulating gradients of mini-batches, while evaluating the gradient direction change with each new mini-batch, and stop accumulating more mini-batches and perform an optimization step when the gradient direction fluctuates.",reasoning
test_1138,Encoders and decoders are (partially) shared between L 1 and L 2.,l 1 and l 2 must use the same vocabulary.,reasoning
test_1139,The LBUNMT model trained in the same language branch performed better than the single model because similar languages have a positive interaction during the training process as shown in Tables 2 and 3.,the distilled information of LBUNMT is used to guide the MUNMT model during backtranslation.,reasoning
test_1140,"As the number of languages increases, the number of translation directions increases quadratically.",zero-shot translation accuracy is important to the MUNMT model.,reasoning
test_1141,"Specifically, training with teacher forcing only exposes the 
model to gold history, while previous predictions during inference may be erroneous.","the model trained with teacher forcing may over-rely on previously predicted words, which would exacerbate error propagation.",reasoning test_1142,"Indeed, viral claims often come back after a while in social media, and politicians are known to repeat the same claims over and over again.","before spending hours fact-checking a claim manually, it is worth first making sure that nobody has done it already.",reasoning test_1143,Previous work has argued that BERT by itself does not yield good sentence representation.,"approaches such as sentence-BERT (Reimers and Gurevych, 2019) have been proposed, which are specifically trained to produce good sentence-level representations.",reasoning test_1144,"In general, there is a 1:1 correspondence, but in some cases an Input claim is mapped to multiple VerClaim claims in the database, and in other cases, multiple Input claims are matched to the same VerClaim claim.","the task in Section 3 reads as follows when instantiated to the PolitiFact dataset: given an Input claim, rank all 16,636 VerClaim claims, so that its matching VerClaim claims are ranked at the top.",reasoning test_1145,We treat the task as a ranking problem.,"we use ranking evaluation measures, namely mean reciprocal rank (MRR), Mean Average Precision (MAP), and MAP truncated to rank k (MAP@k).",reasoning test_1146,"Initially, we tried to fine-tune BERT (Devlin et al., 2019), but this did not work well, probably because we did not have enough data to perform the fine-tuning.","eventually we opted to use BERT (and variations thereof) as a sentence encoder, and to perform max-pooling on the penultimate layer to obtain a representation for an input piece of text.",reasoning test_1147,"For the purpose of comparison, we tried to filter out the text of the input tweet from the text of the article body before attempting the matching, but we still got unrealistically high 
results.",ultimately we decided to abandon these experiments.,reasoning test_1148,"Subsequently, larger values of λ reduced the BLEU scores, suggesting that excessive biased content word translation may be weak at translating function words","Therefore, we set the hyperparameter λ to 0.4 to control the loss of target content words in our experiments (Table 1).",reasoning test_1149,"Especially for the LEFT-ARC lt action, there is only about 0.43% in the total actions, turning out to be the most difficult action to learn given the relatively small training samples.","as shown in Figure 5(a), the accuracy for LEFT-ARC lt is 0, which drops the overall performance heavily.",reasoning test_1150,"Shown as Figure 3, based on late-fusion multimodal learning framework (Cambria et al., 2017; Zadeh et al., 2017), we add independent output units for three unimodal representations: text, audio, and vision",these unimodal representations not only participate in feature fusion but are used to generate their predictive outputs.,reasoning test_1151,It is relatively obvious that new models learn more distinctive unimodal representations compare to original models.,unimodal annotations can help the model to obtain more differentiated information and improve the complementarity between modalities.,reasoning test_1152,"Different from joint training, meta-transfer learning computes the firstorder optimization using the gradients from monolingual resources constrained to the code-switching validation set.","instead of learning one model that is able to generalize to all tasks, we focus on judiciously extracting useful information from the monolingual resources.",reasoning test_1153,"In multimodal context, sarcasm is no longer a pure linguistic phenomenon, and due to the nature of social media short text, the opposite is more often manifested via cross-modality expressions.",traditional text-based methods are insufficient to detect multimodal sarcasm.,reasoning test_1154,"For example, in 
Fig.1b, we can not reason about sarcasm intention simply from the short text 'Perfect flying weather in April' until we notice the downpour outside the airplane window in the attached image.","compared to text-based methods, the essential research issue in multimodal sarcasm detection is the reasoning of cross-modality contrast in the associated situation.",reasoning test_1155,Our work focus on the multimodal sarcasm detection using image and text modalities.,we compare our model with the only two existing related models using the same modalities.,reasoning test_1156,The MLP+CNN model simply takes the multimodal sarcasm detection as a general multimodal classification task via directly concatenating multimodal features for classification.,it gets the worst performance.,reasoning test_1157,"CNN and BiLSTM just treat the sarcasm detection task as a text classification task, ignoring the contextual contrast information.","their performances are worse than MIARN, which focuses on textual context to model the contrast information between individual words and phrases.",reasoning test_1158,"After removing the D-Net, the model only accepts the text and ANPs inputs.",we further incorporate image information via directly concatenating image encoding in the final fusion layer (see row 2).,reasoning test_1159,"In Fig.4b, our model pays more attention to the textual phrase 'these lovely books' with stupid sign, strange sign, and bad sign ANPs which refer to the emoji in the attached image.",it is easy for our model to detect the sarcasm intention that the books are NOT 'lovely' at all.,reasoning test_1160,"In multimodal sarcastic tweets, we expect our model to focus more on the opposite between different modality information.","we reinforce discrepancy between image and text, and on the contrary, weaken their commonality.",reasoning test_1161,"We have already extracted multiple ANPs as the visual semantic information, which is beneficial to model multi-view associations between 
image and text according to different views of ANPs.",we propose the ANP-aware cross-modality attention layer to align textual words and ANPs via utilizing each ANP to query each textual word and computing their pertinence.,reasoning test_1162,"However, the attention weights are difficult to learn, and the attention weights of SimulSpeech model are more difficult to learn than that of the simultaneous ASR and NMT models since SimulSpeech is much more challenging.","we propose to distill the knowledge from the multiplication of the attention weights of the simultaneous ASR and NMT, as shown in Figure 2b and Figure 3.",reasoning test_1163,We add attention-level knowledge distillation (Row 5 vs. Row 3) to the model and find that the accuracy can also be improved.,"we combine all the techniques together (Row 6, SimulSpeech) and obtain the best BLEU scores across different wait-k, which demonstrates the effectiveness of all techniques we proposed for the training of Simul-Speech.",reasoning test_1164,"As shown in Figure 5, simultaneous ASR model makes a mistake which further affects the accuracy of downstream simultaneous NMT model, while SimulSpeech is not suffered by this problem.",simulspeech outperforms cascaded models.,reasoning test_1165,Our proposed approach aims to exploit speech signal to word encoder learnt using an architecture similar to Speech2Vec as lower level dynamic word representations for the utterance classifier.,our system never actually needs to know what word it is but only word segmentation information.,reasoning test_1166,We found there was not a big difference in encoder output quality with higher dimensions.,"we use a 50 dimensional LSTM cell, thus the resulting encoder output becomes 100 (Bidirectional last hidden states) + 100 (cell state) = 200 dimensions.",reasoning test_1167,One challenge is that SSWE and Speech2Vec generally needs large amount of transcribed data to learn high quality word embeddings.,"we first train SSWE on a general 
speech corpus (here, LibreSpeech (Libre)) before fine-tuning it on our classifier training data (results with * show this experiment).",reasoning test_1168,We hypothesize that it can be due to the fact that our behavior code prediction data was split to minimize the speaker overlap.,it becomes easier to overfit when we fine-tune it on some speaker-related properties instead of generalizing for behaviour code prediction task.,reasoning test_1169,"SeqGFMN has a stable training because it does not concurrently train a discriminator, which in principle could easily learn to distinguish between one-hot and soft one-hot representations.",we can use soft one-hot representations that the generator outputs during training without using the Gumbel softmax or REINFORCE algorithm as needed in GANs for text.,reasoning test_1170,A natural task that fits into this problem formulation is commonsense reasoning.,it will be the main focus of the present paper.,reasoning test_1171,"For example, on HellaSwag, the target hypothesis mode is only 8% better than the hypothesis only mode (58.8% versus 50.8%), which confirms that on this setting our zero-shot method is mainly taking advantage of the bias in the hypotheses.",we refrain from doing more zero-shot experiments on both datasets.,reasoning test_1172,"The time complexity of function f 3 is O(k 2 d) because there are k 3 dot product terms r x , h y , t w in total.","the scoring function f 3 needs k 3 times of dot product to compute the score of a triple (h, r, t).",reasoning test_1173,"For the space complexity, the dimension of entity and relation embeddings is d, and there are no other parameters in our SEEK framework.",the space complexity of SEEK is O(d).,reasoning test_1174,"And (5) another deep programming logic (DPL) method, GPT+DPL , is complicated, and the source code is not provided.",we directly used the results from the original paper and did not evaluate it on BERT.,reasoning test_1175,It can be seen that BERT-HA+STM 
outperformed the base model BERT-HA by a large margin in terms of all the metrics.,"the evidence extractor augmented with STM provided more evidential information for the answer predictor, which may explain the improvements of BERT-HA+STM on the two datasets.",reasoning test_1176,"Otherwise, 0 would be assigned.",we compute the adjacency matrix A qcomp for graph G qcomp and A qcell for G qcell.,reasoning test_1177,"Though a significant amount of parameters are introduced for incorporating phrase representation into the Transformer model, our approach (""+Max+Attn+TA"") improved the performance of the Transformer Base model by +1.29 BLEU on the WMT 14 En-De news task, and the proposed Transformer model with phrase representation still performs competitively compared to the Transformer Big model with only about half the number of parameters and 1/3 of the training steps.","we suggest our improvements are not only because of introducing parameters, but also due to the modeling and utilization of phrase representation.",reasoning test_1178,"The tweets collected with these hashtags may contain reported sexist acts towards both men and women.","we collected around 205,000 tweets, among which about 70,000 contain the specific hashtags.",reasoning test_1179,To this date there have been no proposals for a dynamic oracle for CCG parsing with F1 metric over CCG dependency structures and it is not even clear if there is a polynomial solution to this problem.,this is not an option that we can use.,reasoning test_1180,"For example, texts containing some demographic identity-terms (e.g., ""gay"", ""black"") are more likely to be abusive in existing abusive language detection datasets.","models trained with these datasets may consider sentences like ""She makes me happy to be gay"" as abusive simply because of the word ""gay.""",reasoning test_1181,"Because of such a phenomenon, models trained with the dataset may capture the unintended biases and perform differently for texts
containing various identity-terms.",predictions of models may discriminate against some demographic minority groups.,reasoning test_1182,"However, ""perform similarly"" is indeed hard to define.",we pay more attention to some criteria defined on demographic groups.,reasoning test_1183,Attention flow can indicate a set of input tokens that are important for the final decision.,we do not get sharp distinctions among them.,reasoning test_1184,"The lack of explicit claims by research may cause misinformation to potential users of the technology, who are not versed in its inner workings.",clear distinction between these terms is critical.,reasoning test_1185,"On average, the Diversity LSTM model provides 53.52 % (relative) more attention to rationales than the vanilla LSTM across the 8 Text classification datasets.",the attention weights in the Diversity LSTM are able to better indicate words that are important for making predictions.,reasoning test_1186,"While the system is still uncertain, the users often receive inappropriate (e.g., too hard or too easy) exercises.","they get the impression that the system does not work properly, which is especially harmful during the inception phase of an application, as the community opinion largely defines its success.",reasoning test_1187,"Less motivated learners or learners who suffer from distractions, interruptions, or frustration, however, may show different paces in their learning speed or even deteriorate in their proficiency.",we study four prototypical types of learner behavior: -Static learners (STAT) do not improve their skills over the course of our experiments.,reasoning test_1188,"In GEC, it is important to evaluate the model with multiple datasets .","we used GEC evaluation data such as W&I-test, CoNLL-2014 (Ng et al., 2014), FCE-test and JFLEG (Napoles et al., 2017). ",reasoning test_1189,"Following Ye et al. 
(2018), we regard the paragraph start with “our court identified that” and end with “the above facts” as the fact description. Burges et al. (2005) shows that training on ties makes little difference","we could consider only defendant pairs (A, B) such that A plays a more important role than B and label it 1.",reasoning test_1190,"For humans, the most natural way to communicate is by natural language.",future intelligent systems must be programmable in everyday language.,reasoning test_1191,"Utterances that were labeled as non-teaching in the first stage also run through the third stage, except for signature synthesis.",we only construct scripts for this type of utterances.,reasoning test_1192,"Most encouragingly, the average rank of the correct element is near 1.",our scoring mechanism succeeds in placing the right elements on top of the list.,reasoning test_1193,In short chunks each word is important.,unmapped words are strongly penalized.,reasoning test_1194,"However, the vast majority of current datasets do not include the preceding comments in a conversation and such context was not shown to the annotators who provided the gold toxicity labels.",systems trained on these datasets ignore the conversational context.,reasoning test_1195,"To investigate whether adding context can benefit toxicity detection classifiers, we could not use CAT-SMALL, because its 250 comments are too few to effectively train a classifier.",we proceeded with the development of a larger dataset.,reasoning test_1196,TAPAS predicts a minimal program by selecting a subset of the table cells and a possible aggregation operation to be executed on top of them.,"tAPAS can learn operations from natural language, without the need to specify them in some formalism.",reasoning test_1197,All of the end task datasets we experiment with only contain horizontal tables with a header row with column names.,we only extract Wiki tables of this form using the tag to identify headers.,reasoning test_1198,"A 
scalar answer s that also appears in the table (thus C ≠ ∅) is ambiguous, as in some cases the question implies aggregation (question 3 in Figure 3), while in other cases a table cell should be predicted (question 4 in Figure 3).",in this case we dynamically let the model choose the supervision (cell selection or scalar answer) according to its current policy.,reasoning test_1199,"The Krippendorff’s α of the original 3,600 annotations of response appropriateness is 0.431, which is considered not good according to the interpretation of the number in Table 5",we decided to remove the outliers to improve the inter-annotator agreement.,reasoning test_1200,"Moreover, multi-modal input helps the model to understand the intent and the sentiment of the speaker with more certainty.","in the context of a dialogue, multi-modal data such as video (acoustic + visual) along with text helps to understand the sentiment and emotion of the speaker, and in turn, helps to detect sarcasm in the conversation.",reasoning test_1201,"Whereas, in the case of visual modality, it majorly contains the image of the speaker along with sentiment and emotion information.",visual will not have a similar kind of problem as acoustic.,reasoning test_1202,"As we described in Section 2, accurately locating the previous statements about the claim is a very challenging problem.","instead of directly searching for a possible previous statement, we search for related context, where the source are describing a statement related to the claim.",reasoning test_1203,"However, an article can include multiple different statements about the same claim with different opinions, and multiple articles can refer to the same statement about the claim from a common source.","the majority vote by opinions in article level is not good enough, since it suffers from (1) opinions which are too coarse-grained and (2) overcounting the opinions from the same source, which is also known as collusion or dependency of sources
problem in truth finding (Pochampally et al., 2014).",reasoning test_1204,"MPQA dataset is originally developed for identifying sources for the given opinion, and the opinion sometimes can be a noun phrase or an entity, while in our problem we are to extract sources for claims.","we only leave the opinions which are sentences as the query claim, and perform 10-fold cross validation to evaluate the performance of our models and the baselines.",reasoning test_1205,The 'quality' of extracted rationales will depend on their intended use.,we propose an initial set of metrics to evaluate rationales that are meant to measure different varieties of 'interpretability'.,reasoning test_1206,"When deploying a CliniRC system to a new environment (e.g., a new set of clinical records, a new hospital, etc.), it is infeasible to create new QA pairs for training every time",an ideal CliniRC system is able to generalize to unseen documents and questions after being fully trained.,reasoning test_1207,We draw several observation from the evidence selection results: (1) AIR vs. unsupervised methods -AIR outperforms all the unsupervised baselines and previous works in both MultiRC (row 9-15 vs. row 23 in table 1) and QASC (rows 0-6 vs. row 18).,highlighting strengths of AIR over the standard IR baselines.,reasoning test_1208,"We cannot use Webster et al.’s GAP dataset directly, because their data is constrained that the “gender” of the two possible antecedents is “the same”; for us, we are specifically interested in how annotators make decisions even when additional gender information is available.","we construct a dataset called Maybe Ambiguous Pronoun (MAP) following Webster et al.'s approach, but we do not restrict the two names to match gender.",reasoning test_1209,"As it is shown in Fig.
3, there might be multiple changes for each output words during the translation, and we only start to calculate the latency for this word once it agrees with the final results.",it is necessary to locate the last change for each word.,reasoning test_1210,"Many speculate that these representations encode a continuous analogue of discrete linguistic properties, e.g., part-of-speech tags, due to the networks’ impressive performance on many NLP tasks (Belinkov et al., 2017).","of this speculation, one common thread of research focuses on the construction of probes, i.e., supervised models that are trained to extract the linguistic properties directly Conneau et al., 2018;Peters et al., 2018b;Zhang and Bowman, 2018;Naik et al., 2018;Tenney et al., 2019).",reasoning test_1211,"Since TurkCorpus was adopted as the standard dataset for evaluating SS models, several system outputs on this data are already publicly available (Zhang and Lapata, 2017;Zhao et al., 2018;Martin et al., 2020).","we can now assess the capabilities of these and other systems in scenarios with varying simplification expectations: lexical paraphrasing with TurkCorpus, sentence splitting with HSplit, and multiple transformations with ASSET.",reasoning test_1212,"The two images illustrate different NC concepts (i.e., HIGH JUMP and POLE VAULT) which are different configurations of the same elementary objects (i.e., PERSON, ROD, BLEACHERS).","nC concepts require complex image understanding, integrating a fair amount of common sense knowledge.",reasoning test_1213,"For each label, these weights are given by its learned correlation with all the other labels.","the prediction score of each label is affected by the prediction score of the other labels, based on the correlation between label pairs.",reasoning test_1214,"As a side-effect, we observe that older interpretability methods for static embeddings-while more mature than those available for their dynamic counterparts-are underutilized in studying newer 
contextualized representations.",we introduce simple and fully general methods for converting from contextualized representations to static lookup-table embeddings which we apply to 5 popular pretrained models and 9 sets of pretrained weights.,reasoning test_1215,"Through a human study, we show that our manipulated attention-based explanations deceive people into thinking that predictions from a model biased against gender minorities do not rely on the gender.",our results cast doubt on attention's reliability as a tool for auditing algorithms in the context of fairness and accountability.,reasoning test_1216,"Thus, the models (trained on unanonymized data) make use of gender indicators to obtain a higher task performance.",we consider gender indicators as impermissible tokens for this task.,reasoning test_1217,"Since the feasible space is the same for both kinds of constraints, the performance difference is due to the randomness of the ILP solver picking different solutions with the same objective value.",the entity and relation experiments in this section demonstrate that our approach can recover the designed constraints and provide a way of interpreting these constraints.,reasoning test_1218,"In contrast, our method for learning constraints uses general constraint features, and does not rely on domain knowledge.",our method is suited to tasks where little is known about the underlying domain.,reasoning test_1219,Such a z' is a negative example for the constraint learning task because z' has a lower objective value than z.,it violates at least one of the constraints in Eq.,reasoning test_1220,"This feature looks at a pair of entities and focuses on the two relation labels between them, one in each direction.","our running example will give us two positive examples with features (OrgBasedIn, NoRel) and (NoRel,OrgBasedIn).",reasoning test_1221,"Despite the high correlation, we also find that the estimated FAR scores may vary in range compared to the ground-truth 
FAR.",we further use the estimations of different sentence regression approaches to train a linear regression model to fit the ground-truth FAR (denoted as AutoFAR).,reasoning test_1222,"On the other hand, the agreement between rhetorical relations tends to be lower and more ambiguous.",we do not encode rhetorical relations explicitly in our model.,reasoning test_1223,"Equipped with the metrics for abstractiveness above, we want to further understand how abstractive the generated summaries are, and whether the amount of abstractiveness is a result of the training data or the model.",we compute abstractiveness scores for both the reference summaries and summaries generated from a diverse set of models on two datasets.,reasoning test_1224,"Our analysis above shows that the number of unfaithful sentences increases significantly as more abstractive summaries are generated.","the key challenge to faithfulness evaluation is to verify highly abstractive sentences against the source document, where surface similarity matching would fail.",reasoning test_1225,"Given the right corpus, we argue that a language model's probability can be modified into a Fluency Score.",we adapt a language model into the Fluency Model.,reasoning test_1226,"Speakers distill their past experience of language use into what we call ""meaning"" here, and produce new attempts at using language based on this; this attempt is successful if the listener correctly deduces the speaker's communicative intent.","standing meanings evolve over time as speakers can have different experiences (e.g. McConnell-Ginet, 1984), and a reflection of such change can be observed in their changing textual distribution (e.g.
Herbelot et al., 2012; Hamilton et al., 2016).",reasoning test_1227,"Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar (GS), and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact.",citation metrics are often a factor when making decisions about funding research and hiring scientists.,reasoning test_1228,Citation analysis can also be used to gauge the influence of outside fields on one's field and the influence of one's field on other fields.,it can be used to determine the relationship of a field with the wider academic community.,reasoning test_1229,"However, scientometric researchers estimated that it included about 389 million documents in January 2018 (Gusenbauer, 2019)-making it the world's largest source of academic information.","there is growing interest in the use of Google Scholar information to draw inferences about scholarly research in general (Howland, 2010; Orduña-Malea et al.
, 2014; Khabsa and Giles, 2014; Mingers and Leydesdorff, 2015; Martín-Martín et al., 2018) and on scholarly impact in particular (Priem and Hemminger, 2010; Yogatama et al., 2011; Bulaitis, 2017; Ravenscroft et al., 2017; Bos and Nitza, 2019; Ioannidis et al., 2019).",reasoning test_1230,"However, algorithms also provide systematic ways to reduce bias, and some see the mitigation of bias in algorithm decisions as a potential opportunity to move the needle positively (Kleinberg et al., 2018).","we can apply frameworks of contemporaries in human behavior to machines (Rahwan et al., 2019), and perhaps benefit from a more scalable experimentation process.",reasoning test_1231,The language of demographic groups systematically differs from each other for syntactic attributes.,"models trained on samples whose demographic composition (e.g., age and ethnicity) differs from the target perform significantly worse.",reasoning test_1232,"In contrast, the per-worker PEA scores for our annotations are shifted towards the right, indicating better agreement than the random baseline.","we interpret our annotations as showing ""moderate agreement"" under the PEA metric.",reasoning test_1233,"In most cases, Equation 2 is not equal to the ordinary conditional distribution P (Y | T = t) since the latter is simply filtering to the sub-population and the former is changing the underlying data distribution via intervention.","for observational studies that lack intervention, one needs an identification strategy in order to represent P (Y | do(T = t)) in terms of distributions of observed variables.",reasoning test_1234,"For instance, studies have shown that pre-processing decisions dramatically change topic models (Denny and Spirling, 2018;Schofield et al., 2017); embeddings are sensitive to hyperparameter tuning (Levy et al., 2015) and the construction of the training corpus (Antoniak and Mimno, 2018); and fine-tuned language model performance is sensitive to random restarts (Phang et
al., 2018).",reporting sensitivity analysis of the causal effects from these decisions seems crucial: how robust are the results to variations in modeling specifications?,reasoning test_1235,"However, as Gentzel et al. (2019) discuss, synthetic data has no “unknown unknowns” and many researcher degrees of freedom, which limits their effectiveness.","we encourage researchers to evaluate with constructed observational studies or semi-synthetic datasets, although measuring latent confounders from text increases the difficulty of creating realistic datasets that can be used for empirical evaluation of causal methods.",reasoning test_1236,"Our key insight in this paper is that the context or relations through which specific information is propagated among different players in the legislative process (e.g., money donors and legislators), can be leveraged to further improve the performance.","we build a shared relational architecture that models the text of a bill and its context into a graph; Our model captures the behavior of individual legislators, language of bills, and influence of contributions on the decision to identify demographic cleavages.",reasoning test_1237,"A close look reveals that the legislative process cannot be captured in a simple graph as there can be multiple relations between a pair of nodes (e.g., sponsorship and vote between legislators and bills), and the graph consists of several nodes types with different attributes and labels (e.g., bills with competitive labels).","we model the process using a heterogeneous multi-relational graph, as follows: Node attributes: The nodes in our proposed legislative graph come with a rich set of features and information: (1) Bill nodes contain title, description, and full text of the house and senate state bills.",reasoning test_1238,"In NLP, dropped pronouns can cause loss of important information, such as the subject or object of the central predicate in a sentence, introducing ambiguity to applications such 
as machine translation (Nakaiwa and Shirai, 1996; Wang et al., 2016; Takeno et al., 2016), question answering (Choi et al., 2018; Reddy et al., 2019; Sun et al., 2019; Chen and Choi, 2016) and dialogue understanding (Chen et al., 2017; Rolih, 2018).","zero pronouns have recently received much research attention (Liu et al., 2017; Yin et al., 2018a,b)",reasoning test_1239,"Compared to semantic frames (Fillmore and Baker, 2001), the meanings projected by pragmatic frames are richer, and thus cannot be easily formalized using only categorical labels.","as illustrated in Figure 1, our formalism combines hierarchical categories of biased implications such as intent and offensiveness with implicatures described in free-form text such as groups referenced and implied statements.",reasoning test_1240,"In the Post round, users are given the same data, but they are also equipped with explanations of the model predictions for the original inputs.",any improvement in performance is attributable to the addition of explanations.,reasoning test_1241,"As explained, given end-task supervision only, modules may not act as intended, since their parameters are only trained for minimizing the end-task loss.",a straightforward way to improve interpretability is to train modules with additional atomic-task supervision.,reasoning test_1242,"Note that since a single proposed bounding box can align with multiple annotated bounding boxes, it is possible for the numerator to exceed the denominator.","these two choices for a common numerator have issues, and we avoid these issues by defining the numerators of precision and recall separately.",reasoning test_1243,"Specially, DST-SC is designed with a slot connecting mechanism to establish the connection between the target slot and its source slot explicitly.",it can take advantage of the source slot value directly instead of reasoning from preceding turns.,reasoning test_1244,"As claimed in Section 1, connecting the target slot with its source 
slot helps to decrease the reasoning difficulty.",we enhance the copyaugmented encoder-decoder model with a slot connecting mechanism to model slot correlations directly.,reasoning test_1245,Their applications are usually limited in a single domain.,"several open vocabulary approaches in generative fashion (Xu and Hu, 2018;Wu et al., 2019;Ren et al., 2019) are proposed to handle unlimited slot values in more complicated dialogues.",reasoning test_1246,A wide range of NLP tasks have greatly benefited from the pre-trained BERT model.,we also finetune the pre-trained BERT-Large model on our task through sequence pair classification schema.,reasoning test_1247,"In Example 4, BERT-ft prefers A but the answer is C. The reason why BERT-ft chooses A may be that ""enjoy life"" happens in the context, but summarizing the next sentence is necessary to achieve the correct answer.",it is necessary to improve the ability of BERT to represent meaning at the sentence level beyond representing individual words in context.,reasoning test_1248,A potential artifact type for our task is whether we could detect distractors without passages.,"we finetune BERT-Large as a binary classifier, the input of which is just distractors and other correct candidates.",reasoning test_1249,"With |B| = 5 blanks and |C| = 7 candidates, the size of answer space, |A|, is number of permutations |B| objects taken |C| at a time, i.e., P(7, 5) = 2520.",the probability of answering all blanks correctly is 1/2520 = 0.03% What are the chances of getting answers partially correct?,reasoning test_1250,"While fine-tuning for span-based QA, every utterance as well as the question are separated encoded and multi-head attentions and additional transformers are built on the token and utterance embeddings respectively to provide a more comprehensive view of the dialogue to the QA model.",our model achieves a new state-of-the-art result on a span-based QA task where the evidence documents are multiparty 
dialogue.,reasoning test_1251,"One important reason behind this is that, due to the vague definition of commonsense knowledge, we are not clear about what the essential knowledge types are and thus we are unclear about how to represent, acquire, and use them.",we can only treat commonsense knowledge as a black box and try to learn it from limited training data.,reasoning test_1252,"If at least four annotators think the reason is plausible, we will accept that reason.",we identify 992 valid reasons.,reasoning test_1253,Note that each reason may contain inference over multiple knowledge types.,"for each reason, we invite five different annotators to provide annotations.",reasoning test_1254,"Each annotators are provided with detailed instruction of the job, descriptions of each candidate category, and examples for the category.","we collect 4,960 annotations.",reasoning test_1255,"Besides the above analysis, we are also interested in how different models perform on questions that require complex reasoning types.",we divide all WSC questions based on how many knowledge types are required to solve these questions and show the result in Table 5.,reasoning test_1256,"One possible reason is that even though the designers of WSC are trying to avoid any statistical correlation between the answer and the trigger word, such statistical correlation still exists.",pre-trained language representation models can learn such correlation from large-scale training corpus and thus can answer WSC questions without fully understanding the reasons behind.,reasoning test_1257,"Research in Cyber Argumentation has shown that incorporating both stance polarity and intensity information into online discussions improves the analysis of discussions and the various phenomena that arise during a debate, including opinion polarization (Sirrianni et al., 2018), and identifying outlier opinions (Arvapally et al., 2017), compared to using stance polarity alone.","automatically identifying both the 
post's stance polarity and intensity, allows these powerful analytical models to be applied to unstructured debate data from platforms such as Twitter, Facebook, Wikipedia, comment threads, and online forums.",reasoning test_1258,The difference between strong opposition and weak opposition is often expressed through subtle word choices and conversational behaviors.,"to accurately predict agreement intensity, a learned model must understand the nuances between word choices in the context of the discussion.",reasoning test_1259,"The authors were instructed on how to annotate their posts, but the annotations themselves were left to the post's author's discretion.",including author information into our models would likely improve the stance polarity and intensity prediction results.,reasoning test_1260,Our full model outperforms state-ofthe-art unsupervised fine-tuning approaches and partially supervised approaches using crosslingual resources in 8/11 tasks.,our results provide a strong lower bound performance on what future semi-supervised or supervised approaches are expected to produce.,reasoning test_1261,"The maximization is guaranteed to converge to the unique maximum likelihood estimator in finite steps under the assumption that in every possible partition of the items into two nonempty subsets, some subject in the second set beats some subject in the first set at least once (Hunter, 2004).","a pairwise comparison experiment is restricted in two ways: (i) The matrix formed by the comparisons must construct a strongly connected graph; (ii) The comparisons between the partitions cannot all be won by subjects from the same group, i.e., no item has losses or wins exclusively.",reasoning test_1262,"However, this strategy suffers from the major drawback that for some step sizes, the resulting graph has multiple unconnected components, thus violating the restriction that the comparison matrix must form a strongly connected graph.","complex combinations of different 
step sizes are needed, resulting in needlessly complicated experimental setups.",reasoning test_1263,"By example, going from x = 1, k = 16 to x = 1, k = 4 ends up at the same number of comparisons as x = 2, k = 8, but has a slightly higher ranking accuracy.",it is more economical to increase the sampling rate until the required accuracy is met than collecting multiple judgments.,reasoning test_1264,"However, the specific choice of k depends on scale and domain of the data as well as trustworthiness of comparisons.","we refrain from making a general suggestion for the choice of k. Thus, if the model is to be adapted to drastically different domains or item counts, exploratory studies are advised to estimate the quality tradeoff for a specific use case.",reasoning test_1265,"While this could hint at a data bias, with crowd workers just voting for longer texts in the comparison but not actually reading all of it, the effect is much less pronounced when only measuring the correlation in texts longer than 100 words (n = 869).",much of the pronounced effect can be explained by short texts receiving justified low scores rather than longer texts being voted higher regardless of content.,reasoning test_1266,"The first step of the PCA accounts for 73% of the data variance, and is equally influenced by all three quality dimensions.",evidence is given towards the hypothesis.,reasoning test_1267,"To account for this, the score distributions are equally shifted into the positive domain.",a standardized scalar value for overall argument quality can be calculated.,reasoning test_1268,"While this paradigm works to a certain extent, it usually retrieves knowledge facts only based on the entity word itself, without considering the specific dialogue context.",the introduction of the context-irrelevant knowledge facts can impact the quality of generations.,reasoning test_1269,"To summarize, Felicitous Fact mechanism can alleviate the first two issues, and the next two techniques solve 
the last issue.","our approach can improve the utilization rate of knowledge graphs, as well as can promote the diversity and informativeness of the generated responses.",reasoning test_1270,"In the field of natural language processing (NLP) which widely employs DNNs, practical systems such as spam filtering (Stringhini et al., 2010) and malware detection (Kolter and Maloof, 2006) have been broadly used, but at the same time the concerns about their security are growing.",the research on textual adversarial attacks becomes increasingly important.,reasoning test_1271,"Instead, the successful uses of neural networks in computational linguistics have replaced specific pieces of computational-linguistic models with new neural network architectures which bring together continuous vector spaces with structured representations in ways which are novel for both machine learning and computational linguistics.","the great progress which we have made through the application of neural networks to natural language processing should not be viewed as a conquest, but as a compromise.",reasoning test_1272,"With parameters shared across entities and sensitive to these properties and relations, learned rules are parameterised in terms of these structures.","transformer is a deep learning architecture with the kind of generalisation ability required to exhibit systematicity, as in (Fodor and Pylyshyn, 1988).",reasoning test_1273,"Since most of the common neighbors would be popular entities, they will be neighbors of many other entities.",it is still challenging to align such entities.,reasoning test_1274,"However, not all one-hop neighbors contribute positively to characterizing the target entity.",considering all of them without careful selection can introduce noise and degrade the performance.,reasoning test_1275,"On another scenario, recent analysis also reveals that state-of-the-art sequential neural language models still fail to learn certain long-range syntactic dependencies 
(Kuncoro et al., 2018).",it is an interesting problem to explore the relation between language models and syntax and investigate whether syntax can be integrated to enhance neural language models.,reasoning test_1276,Their differences are such that they can not be easily collapsed into a single meta-tag.,"we do not penalize the model for producing any variation of equally valid analyses given the surface form, and for each model we adjust the evaluation for syncretism in a post-processing step.",reasoning test_1277,Chinese is an ideographic language and lacks word delimiters between words in written sentences.,chinese word segmentation (cWS) is often regarded as a prerequisite to downstream tasks in chinese natural language processing.,reasoning test_1278,Source domain data and target domain data generally have different distributions.,models built on source domain data tend to degrade performance when they are applied to target domain data.,reasoning test_1279,Chinese word segmentation is typically formalized as a sequence tagging problem.,"traditional machine learning models such as Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs) are widely employed for CWS in the early stage (Wong and Chan, 1996;Gao et al., 2005;Zhao et al., 2010).",reasoning test_1280,"Furthermore, the system outputs pseudo-tags, and the mapping from pseudo-tags to paradigm slots is unknown.","we propose to use best-match accuracy (BMAcc), the best accuracy among all mappings from pseudo-tags to paradigm slots, for evaluation.",reasoning test_1281,Hiring crowd-sourcing workers to perform these annotations is very costly.,"we propose automated data augmentation methods to expand existing well-annotated dialog datasets, and thereby train better dialog systems.",reasoning test_1282,"In other words, a paraphrased dialog utterance needs to serve the same function as the original utterance under the same dialog context.",we propose to construct dialog paraphrases that consider dialog 
context in order to improve dialog generation quality.,reasoning test_1283,"These patterns can be seen as the ""template"" to produce the questions.",we can use the patterns as the prior to regularize the QG model and obtain better results accordingly.,reasoning test_1284,"These methods often need a lot of labeled data for training, but the data is expensive to obtain.",we are inspired to apply our generated results to enrich the training set for the task of MRC-QA.,reasoning test_1285,Slightly modified queries such as iPhone X charger and case for iPhone X refer to different products.,it is hard for distributed representations to capture the nuances.,reasoning test_1286,"BLEU scores do not indicate whether or not a generated queries is a ""realistic"" modification of the original query.",we also had 2500 generated pairs annotated by human experts who were specifically trained to decide if a query-item pair is matched or not,reasoning test_1287,"To account for these differences between query strings and item titles, we separately train word embeddings using word2vec (Mikolov et al., 2013) on anonymized query logs and item titles.","the same word can have two embeddings, one for the query and one for the title.",reasoning test_1288,"Then, the evaluation of model uncertainty is conducted on D U to mirror the confidence over the current curriculum.","our approach reserves the efficiency in CL, in the meanwhile, guiding the duration of each curriculum in a self-adaptive fashion.",reasoning test_1289,"This phenomenon reveals that the most ""simple"" and ""complex"" sentences quantified by different measures are relatively similar, and the main diversity lies in those sentences of which the difficulties hardly to be distinguished.",we argue that the improvements of the proposed method may mainly contribute by the differences in these two steps.,reasoning test_1290,We noticed that the Spanish tokenizer sometimes merges multi-word expressions into a single token joined 
with underscores for contiguous words.,some tokens cannot be aligned with the corresponding entity annotations.,reasoning test_1291,We restrict our analysis to Spanish since the data is labeled with both de-identification and concept information (see Section 4.1).,we can also investigate the difference between gold and predicted de-identification labels.,reasoning test_1292,Semantic matching operations between two mentions (and their contexts) are performed only at the output layer and are relatively superficial.,"it is hard for their models to capture all the lexical, semantic and syntactic cues in the context.",reasoning test_1293,"Moreover, there can be multiple dialogue acts mentioned in a single dialogue turn, which requires the model to attend to different acts for different sub-sequences.","a global vector is unable to capture the inter-relationships among acts, nor is it flexible for response generation especially when more than one act is mentioned.",reasoning test_1294,"Being a span enumeration type model, DYGIE++ only works on paragraph level texts and extracts relations between mentions in the same sentence only.",we subdivide SCIREX documents into sections and formulate each section as a single training example.,reasoning test_1295,The target of our framework is to conduct the formats controlled text generation.,the indicating symbols for format and rhyme as well as the sentence integrity are designed based on the target output sequence.,reasoning test_1296,But the Rhyme accuracy and the sentence integrity will drop simultaneously.,in the experiments we let k = 32 to obtain a trade-off between the diversity and the general quality.,reasoning test_1297,"Since the gradients from optimizing the functional task are sent all the way into the base captioning model, this causes catastrophic forgetting of the core knowledge of language, leading to language drift.","we use a language regularizer term in the form of Kullback-Leibler divergence between 
pre-trained and fine-tuned language modeling distributions (Havrylov and Titov, 2017).",reasoning test_1298,"We can observe that the larger the validation loss, the lower the BLEU score.",the validation loss can be a good performance proxy.,reasoning test_1299,"Page's argument is that the original student is not going to be much worse off with a com-puter than with an (average) human reader, because originality is a subjective construct.","once research uncovers objective and measurable aspects of ""original"" writing, relevant features can be added into an AWE system; finding such aspects, as well as measuring them, is still work in progress.",reasoning test_1300,"In particular, a story entails a highly structured network of relations (timelines, causality, etc.).",stories do exercise abilities beyond simple factoid extraction.,reasoning test_1301,The ones canonicalized by the guidelines and by annotators following them may not always be the most useful.,it may prove beneficial to appeal directly to human intuition about what understanding entails.,reasoning test_1302,We expect that introducing such importance information for the words in the deep learning models might lead to improved performance for RE.,"in this work, we propose to obtain an importance score for each word in the sentences from the dependency trees (called the syntax-based importance scores).",reasoning test_1303,"This dataset does not provide training, development, or test splits due to the small number of samples.",we run 5-fold cross validations and report the average scores.,reasoning test_1304,"First, we notice that the class name generation goal is similar to the hypernymy detection task which aims to find a general hypernym (e.g., ""mammal"") for a given specific hyponym (e.g., ""panda"").","we leverage the six Hearst patterns (Hearst, 1992), widely used for hypernymy detection, to construct the class-probing query.",reasoning test_1305,"Most state-of-the-art diacritic restoration models 
are built on character level information which helps generalize the model to unseen data, but presumably lose useful information at the word level.","to compensate for this loss, we investigate the use of multi-task learning to jointly optimize diacritic restoration with related NLP problems namely word segmentation, part-of-speech tagging, and syntactic diacritization.",reasoning test_1306,"Despite leveraging tasks focused on syntax (SYN/POS) or morpheme boundaries (SEG), the improvements extend to lexical diacritics as well.",the proposed joint diacritic restoration model is also helpful in settings beyond word final syntactic related diacritics.,reasoning test_1307,"In an average development document with 201 candidates per mention, the number of pairwise queries needed to fully label a document is 15, 050, while the maximum number of discrete queries is only 201 (i.e., asking for the antecedent of every mention).","the average document can be fully annotated via discrete annotation in only 2.6% of the time it takes to fully label it with pairwise annotation, suggesting that our framework is also a viable exhaustive annotation scheme.",reasoning test_1308,"However, structured attributes in product catalogs are often sparse, leading to unsatisfactory search results and various kinds of defects.",it is invaluable if such structured information can be extracted from product profiles such as product titles and descriptions.,reasoning test_1309,"Our main idea is that by encouraging TXtract to predict the product categories using only the product profile, the model will learn token embeddings that are discriminative of the product categories.",we introduce an inductive bias for more effective category-specific attribute value extraction.,reasoning test_1310,The insight behind our loss function is that a product assigned underĉ could also be assigned under any of the ancestors ofĉ.,"we consider hierarchical multi-label classification and encourage TXtract to assign a 
product to all nodes in the path fromĉ to the root, denoted by (ĉ K ,ĉ K−1 , .",reasoning test_1311,It might be hard to say that the superior performance of TCattn is due to the neural architecture and attention scores rather than the richer training resources.,a comparison between TCattn and a model that uses both student essays and the source article is needed.,reasoning test_1312,"As creation is expensive, most annotated clinical datasets are small, such as for our task.",we look to alternative data sources for pre-training our model.,reasoning test_1313,"However, training on synonyms will allow for a greater variety of terms to be seen by our model than otherwise possible.","using all synonyms taken from the annotated subset of the UMLS, we pre-train our linker before training on the annotated clinical notes.",reasoning test_1314,"Because of this, the common practice is to test new methods on a small number of languages or domains, often semi-arbitrarily chosen based on previous work or the experimenters' intuition.",this practice impedes the NLP community from gaining a comprehensive understanding of newly-proposed models.,reasoning test_1315,"Another limitation is that generated data is of limited use for training models, since it contains simple regularities that supervised classifiers may learn to exploit.",we create IMP-PRES solely for the purpose of evaluating NLI models trained on standard datasets like MultiNLI.,reasoning test_1316,"Thus, we anticipate two possible rational behaviors for a MultiNLI-trained model tested on an implicature: (a) be pragmatic, and compute the implicature, concluding that the premise and hypothesis are in an 'entailment' relation, (b) be logical, i.e., consider only the literal content, and not compute the implicature, concluding they are in a 'neutral' relation.","we measure both possible conclusions, by tagging sentence pairs for scalar implicature with two sets of NLI labels to reflect the behavior expected under 
""logical"" and ""pragmatic"" modes of inference, as shown in Table 2.",reasoning test_1317,"Overall, more than 20% of the treebanks in the UD 2.2 collection have flat structures in more than 20% of their training-set sentences.",a parsing approach taking into account the special status of headless structural representations can potentially benefit models for a large number of languages and treebanks.,reasoning test_1318,CRF models the conditional probability of the whole label sequence given the whole input sequence.,"instead of using the label distribution over individual token, we could use the probability distribution for the whole label sequence, to compute KL divergence.",reasoning test_1319,"But as explained in Sec.1, the adversarial loss of conventional VAT cannot be calculated on top of CRF.",vAT in the second set of Table.2 only applies CRF for label loss.,reasoning test_1320,And the meaning of each character changes dramatically when the context changes.,a CSC system needs to recognize the semantics and aggregate the surrounding information for necessary modifications.,reasoning test_1321,"By contrast, the similar characters are semantically distinct in CSC.",we deeply investigate the effect of our SpellGCN and propose several essential techniques.,reasoning test_1322,"Since the pronunciation similarity is more fine-grained compared with the shape similarity category, we combine the pronunciation similarities into one graph.",we construct two graphs corresponding to pronunciation and shape similarities.,reasoning test_1323,"However, more than half of the databases in Spider contain 5 tables or less.","we also report the coverage of attributes only considering the databases which have more than 5 tables, where Spider only covers 49.6% of attributes.",reasoning test_1324,Table 2 shows that our corpus contains a much higher lexical complexity of the questions than Spider (0.67 instead of 0.52).,"our approach seems to avoid trivial or monotonous questions, 
which also matches with our impression from manual inspection.",reasoning test_1325,Roy and Roth (2017) shown significantly lowered performance if highly similar MWPs are removed.,dataset diversity is more critical than the dataset size for accurately judging the true capability of an MWP solver.,reasoning test_1326,"Since MWPs are usually clearly specified (with a sure answer), there is no ambiguous interpretation once the answer is given.","as opposed to other corpora in which annotations (mostly linguistic attributes) are mainly based on human subjective judgment, the MWP answer/equation annotation is more objective and must be consistent.",reasoning test_1327,"Therefore, as opposed to other corpora in which annotations (mostly linguistic attributes) are mainly based on human subjective judgment, the MWP answer/equation annotation is more objective and must be consistent.","human carefulness, instead of human agreement, is a more critical issue in this task.",reasoning test_1328,"In grades 5 and 6, improved math skills enable students to solve difficult MWPs that require more aggregative operations and additional domain knowledge.",the grade level is a useful indicator of difficulty and can be employed to evaluate the capability of MWP solving systems.,reasoning test_1329,"The ""words"" in abstract syntax can be seen as word senses, which the concrete syntax can realize by different word forms.",the abstract function Love in Figure 7 corresponds to words in different languages used for expressing a certain sense of the English word love.,reasoning test_1330,"Segment analysis is also used when forming compound words in languages like Finnish, German, and Swedish.","english summer time translates compositionally to Swedish sommar+tid, which is rendered as sommartid.",reasoning test_1331,"Because of this information loss, one cannot expect that such details will be correctly deduced or guessed, especially without a wider context.",the default linearization by the 
GF generator (and the guessed linearization by the JAMR generator) of the AMR concept person in Figure 20 is in the singular form.,reasoning test_1332,"Recall that our method first collects the top M candidate chains, ordered by retrieval score (Section 3.2).",a simple baseline is to use that retrieval score itself as a measure of chain validity.,reasoning test_1333,"For output structure generalization, we formatted the answers as JSON to enable more complex zero-shot relation extraction tasks.","the models output answers as both text and JSON, in a seq-to-seq fashion, depending on the question type.",reasoning test_1334,"""Chris"", supposedly tagged with ""Person"" in this example sentence, is tagged as other labels in most cases.","in the predicting process, it is difficult to label ""Chris"" correctly.",reasoning test_1335,The advantage of stance over TS is indirect stances.,we also investigate how well various methods perform on indirect stance.,reasoning test_1336,"Custom methods for other targets also behave in a similar manner (c.f Figure 4), with certain targets like Putin outperforming the best OTS method, STB in this case, with fewer than 195 labeled tweets.","instead of having different training sizes for different targets, we use the same amount and find that the LR custom methods outperform OTS methods for all targets except Macron.",reasoning test_1337,"Although researchers have designed a lot of meaning representations, recent work focuses on only a few of them.",the impact of meaning representation on semantic parsing is less understood.,reasoning test_1338,A semantically equivalent program may have many syntactically different forms.,"if the training and testing data have a difference in their syntactic distributions of logic forms, a naive maximum likelihood estimation can suffer from this difference because it fails to capture the semantic equivalence (Bunel et al., 2018).",reasoning test_1339,"We regard Prolog, Lambda Calculus, and FunQL as 
domainspecific MRs, since the predicates defined in them are specific for a given domain.","the execution engines of domain-specific MRs need to be significantly customized for different domains, requiring plenty of manual efforts.",reasoning test_1340,"With these resources, we crossvalidate the correctness of annotations and execution engines by comparing the execution results of logical forms.",we found nearly 30 Prolog logical forms with annotation mistakes and two bugs in the execution engines of Prolog and FunQL.,reasoning test_1341,"We use two strategies for this purpose: (1) we explicitly introduce distinct divergence categories for unrelated sentences and sentences that overlap in meaning; and (2) we ask for annotation rationales (Zaidan et al., 2007) by requiring annotators to highlight tokens indicative of meaning differences in each sentence-pair.","our approach strikes a balance between coarsely annotating sentences with binary distinctions that are fully based on annotators' intuitions (Vyas et al., 2018), and exhaustively annotating all spans of a sentence-pair with fine-grained labels of translation processes (Zhai et al., 2018).",reasoning test_1342,"Without this, the model can learn the goal task (such as translation) with reasonable accuracy, but the learned semantic embeddings are of poor quality until batch sizes approximately reach 25,000 tokens.","we use a maximum batch size of 50,000 tokens in our ENGLISHTRANS, BILIN-GUALTRANS, and BGT W/O PRIOR, experiments and 25,000 tokens in our BGT W/O LANGVARS and BGT experiments.",reasoning test_1343,We hypothesize that these datasets contain many examples where their gold scores are easy to predict by either having similar structure and word choice and a high score or dissimilar structure and word choice and a low score.,"we split the data using symmetric word error rate (SWER), 7 finding sentence pairs with low SWER and low gold scores as well as sentence pairs with high SWER and high gold 
scores.",reasoning test_1344,"Most existing KGs are built separately by different organizations, using different data sources and languages.",kGs are heterogeneous that the same entity may exist in different kGs in different surface forms.,reasoning test_1345,Equivalent entities in two KGs are usually neighbored by some other equivalent entities.,structure information in KGs are very important for discovering entity alignments.,reasoning test_1346,"However, such a hard cutoff of the search space makes these approaches insufficient in the exploration of the (already scarce) labeled data and limited by the ranker since most sentences are discarded, 2 albeit the discarded sentences are important and could have been favored.","although these studies perform better than directly applying their base SDS models (See et al., 2017;Tan et al., 2017) to MDS, they do not outperform state-of-the-art MDS methods (Gillick and Favre, 2009;Kulesza and Taskar, 2012).",reasoning test_1347,This variant solves L2 but may re-expose the RL agent to L1 since its MMR module and neural module are loosely coupled and there is a learnable layer in their combination.,"we design a second variant, RL-MMR SOFT-ATTN , which addresses both L1 and L2 by tightly incorporating MMR into neural representation learning via soft attention.",reasoning test_1348,"The number of epochs is set to 10,000 and we adopt early stopping -the training process terminates if RL-MMR cannot achieve better results on the validation set after 30 continuous evaluations.","the runs often terminate before 5,000 epochs, and the overall training time ranges from 40 to 90 minutes.",reasoning test_1349,"Note that since labels are transmitted using a model, the model has to be transmitted as well (directly or indirectly).",the overall codelength is a combination of the quality of fit of the model (compressed data length) with the cost of transmitting the model itself.,reasoning test_1350,"However, despite multiple recent efforts 
on this newly proposed dataset, published work so far in multimodal KPE has either omitted available features (Sun et al., 2020), or has adopted a brute force approach to feature encoding (direct concatenation of raw features) (Xiong et al., 2019).",in this work we strive for a more nuanced approach to leveraging available features for multimodal KPE and offer a uniquely comprehensive approach.,reasoning test_1351,"Second, the behavior and characteristics of visual and text modality features are different from one another.",the first-step self-attention should be modeled in separate networks for text and visual features.,reasoning test_1352,"First, unlike previous continual learning works on image classification (Kirkpatrick et al., 2017;Zenke et al., 2017), VisCOLL requires predicting, for example, a noun with a verb or an adjectivewhich results in a significantly large search space.","of this increased search space, memory based continual methods (Robins, 1995;Aljundi et al., 2019a) cannot expect to store prototypes of each visited compositions.",reasoning test_1353,"Second, the increased search space makes it infeasible to view all possible combinations of atomic words at train time.","to succeed on VisCOLL, models should generalize to novel compositions at test time (also called composition generalization) (Lake and Baroni, 2017; Keysers et al., 2020).",reasoning test_1354,"In contrast, task boundaries in VisCOLL are unknown and ""smooth"" (i.e., with gradual transitions between tasks)-a setting that is closer to real-world situations.","visCOLL rules out many continual learning algorithms which require explicit task identity and boundary (Kirkpatrick et al., 2017;Rusu et al., 2016).",reasoning test_1355,"Finally, the algorithm greedily tries to put the proposed number of instances into each time interval to construct the stream.",the constructed data stream has a gradually shifting task distribution without strict boundaries.,reasoning test_1356,"For (i), the 
target probability of each word is set proportional to the square root of its frequency in the visited stream.","highly frequent words would take a smaller portion compared to reservoir sampling where the word distribution in the memory is linear to its frequency in the visited stream, leaving space for storing more diverse examples.",reasoning test_1357,We find that (i) diversifying storage improves performance at the early stage of the stream but not in the later stages; (ii) prioritizing words likely to be forgotten does not improve performance.,future works should find a balance between storing more diverse or important examples and respecting original data distribution.,reasoning test_1358,"Unlike Multistream, which leverages fine-grained region-level features, our results are reported on global framelevel features.",it may be difficult for HERO to capture the inconsistency between hypothesis and video content.,reasoning test_1359,Adversaries could easily exploit this to disseminate realistic-looking neural fake news.,exploring the visual-semantic consistency between the article text and images could prove to be an important area for research in defending against generated disinformation.,reasoning test_1360,"In contrast, Type C articles have the potential to be exploited by adversaries to disseminate large amount of misleading disinformation due to its generated article contents.",our proposed approach is geared towards addressing this particular type of generated articles.,reasoning test_1361,"Furthermore, each OIE system extracts the interested facts in the desired form at the time of development and omits the uninterested facts.",they are not adaptable to new requirements.,reasoning test_1362,It is also very abstract that sentences with the same meaning but in very different expressions will share the same AMR annotation.,aMR is difficult to label (cost about 10 min to label a sample 4 ) and is very difficult to learn.,reasoning test_1363,"Due to resource 
limitations and in the spirit of environmental responsibility (Strubell et al., 2019;Schwartz et al., 2019), we conduct our experiments on the base models: BERT-baseuncased, RoBERTa-base, and DistilBERT-baseuncased.",the BERT/RoBERTa models we use have 12 transformer blocks (0–11 indexed) producing 768-dimension vectors; the DistilBERT model we use has the same dimension but contains 6 transformer blocks (0–5 indexed).,reasoning test_1364,"Masking and finetuning achieve accuracy 84.79% and 85.25%, which are comparable and both outperform the baseline 50%, demonstrating successful knowledge transfer.",finetuning and masking yield models with similar generalization ability.,reasoning test_1365,"For POS, after wordpiece tokenization, we see 1 sentence in dev and 2 sentences in test have more than 126 (the [CLS] and [SEP] need to be considered) wordpieces.",we exclude 5 annotated words in dev and 87 annotated words in test.,reasoning test_1366,"Intuitively, if a training example has a low sentence-level probability, it is less likely to provide useful information for improving model performance, and thus is regarded as an inactive example.",we adopt sentence-level probability P (y|x) as the metric to measure the activeness level of each training example: where T is the number of target words in the training example.,reasoning test_1367,"To save the time cost, a promising strategy is to let the identification model take the responsibility of rejuvenation.",we used the TRANSFORMER-BIG model with the large batch configuration trained on the raw data to accomplish both identification and rejuvenation.,reasoning test_1368,"In the target vocabulary, words are sorted in the descending order of their frequencies in the whole training data, and the frequency rank of a word is its position in the dictionary.","the higher the frequency rank is, the more rare the word is in the training data.",reasoning test_1369,"The language coverage could be broadened with other knowledge, such 
as that encoded in WALS, to distinguish even more language properties.","to obtain the best of both views (KB and task-learned) with minimal information loss, we project a shared space of discrete and continuous features using a variant of canonical correlation analysis (Raghu et al., 2017).",reasoning test_1370,We then can assess their applicability on multilingual NMT tasks that require guidance from language relationships.,language clustering and ranking related partner languages for (multilingual) transfer are our study cases (§6).,reasoning test_1371,"The list is a crafted set of concepts for comparative linguistics (e.g. I, eye, sleep), and it is usually processed by lexicostatistics methods to study language relationship through time.",we prefer to argue that corpus-based embeddings could partially encode lexical similarity of languages.,reasoning test_1372,"However, the lack of an unequivocal definition of hate speech, the use of slurs in friendly conversations as opposed to sarcasm and metaphors in elusive hate speech (Malmasi and Zampieri, 2018), and the data collection timeline (Liu et al., 2019) contribute to the complexity and imbalance of the available datasets.","training hate speech classifiers easily produces false positives when tested on posts that contain controversial or search-related identity words (Park et al., 2018;Sap et al., 2019;Davidson et al., 2019;Kim et al., 2020).",reasoning test_1373,The initial list of predefined keywords such as the ones we have shown in Table 1 carries additional words in English and Arabic.,"for these two datasets, we have measured bias using two predefined lists of keywords: the initial list and one that is specific to the dataset in question.",reasoning test_1374,The datasets were too large to be fit into GPU as a whole.,"we shifted the neighbor search to CPU, but that again took more than a day to complete.",reasoning test_1375,The reason is that both of them synthesize sentences are either of less diverse 
or of less quality.,"we propose a controllable sampling strategy to generate reasonable source sentences: at each decoding step, if the word distribution is sharp then we take the word with the maximum probability, otherwise the sampling method formulated in Eq.",reasoning test_1376,"However, observe that this new rule performs two tests that could be done independently: 1. the right span boundary of the first antecedent must match the left span boundary of the second one; 2. the right span boundary of the second antecedent must match the left span boundary of the third antecedent.","we can break the deduction into two sequential deductions, first testing the ""k"" boundary then the ""l"" boundary.",reasoning test_1377,"A popular solution tackles the grammatical error correction as a monolingual machine translation task where ungrammatical sentences are regarded as the source language and corrected sentences as the target language (Ji et al., 2017;Chollampatt and Ng, 2018a).","the GEC can be modeled using some relatively mature machine translation models, such as the sequence-to-sequence (seq2seq) paradigm (Sutskever et al., 2014).",reasoning test_1378,"Assuming that a token is selected to be replaced, and its candidate substitutes are retrieved by the mapping, we want the selected substitute can fit in well with the token's context and maintain both the semantic and syntactic coherence.","we define a function s based on the edit distance (Marzal and Vidal, 1993) to estimate the similarity scores between two sentences.",reasoning test_1379,We would also like to know which type of words to modify is most likely to form a successful attack.,we calculate the correction rates of the newly added errors with different types of part-ofspeech.,reasoning test_1380,"As it is difficult to ensure high quality annotations for 21 languages using crowdsourcing, we relied on colleagues by reaching out on NLP and Linguistics mailing lists.",the number of evaluators per language 
varies (cf.,reasoning test_1381,"Specifically, when the LLL model is trained on a new task, we assign a teacher model to first learn the new task, and pass the knowledge to the LLL model via knowledge distillation.",the LLL model can better adapt to the new task while keeping the previously learned knowledge.,reasoning test_1382,"Inspired by knowledge distillation (Bucila et al., 2006;Hinton et al., 2015;Kim and Rush, 2016), in which a student (smaller) model is trained to imitate the behavior of a teacher (larger) model in order to reach the performance closer to the teacher model, the LLL model in L2KD can be seen as a weak learner that needs to compress knowledge from different tasks into a compact single model.","lll can benefit from the similar procedure of knowledge distillation, although the model size is equal to its teacher model.",reasoning test_1383,"By way of assuming Ψ ∼ GEM(γ), an HDP assumes an infinite number of topics are present a priori, with the number of tokens per topic decreasing rapidly with the topic's index in a manner controlled by γ.","under the model, a topic with a sufficiently large index should contain no tokens with high probability.",reasoning test_1384,"The subcluster split-merge algorithm is designed to converge with fewer iterations, but is more costly to run per iteration.",we used a fixed computational budget of 24 hours of wall-clock time for both algorithms.,reasoning test_1385,"However, several alignment-based STS methods employ Euclidean distance or dot product to compute the word dissimilarity.",the question arises as to which is the most suitable method for computing word dissimilarity.,reasoning test_1386,"The key difference lies in the fact that ADD treats a sentence as a single vector (the barycenter of direction vectors), whereas WRD treats a sentence as a set of direction vectors.","it is natural that WRD had a positive effect on STS tasks, given that STS tasks require the word alignment (i.e., they assume that 
words are treated disjointedly).",reasoning test_1387,"The phenomenon, however, is often overlooked in existing matching models.","the feature vectors are constructed without any regularization, which inevitably increases the difficulty of learning the downstream matching functions.",reasoning test_1388,"In WD-Match, a Wasserstein distance-based regularizer is defined to regularize the features vectors projected from different domains.",the method enforces the feature projection function to generate vectors such that those correspond to different domains cannot be easily discriminated.,reasoning test_1389,G module is implemented as a two-layer MLP (the number of neurons in the second layer is set as one).,"the additional computing cost comes from the training of the two-layer MLP, which is of O(T * N * K * 1), where T is the number of training iterations, N number of training examples, K number of neurons in the first layer of MLP (without considering the compute cost of the activation function).",reasoning test_1390,"We observe that the proposed two-stage methods, GW-GeoMM and SL-GeoMM, obtain scores on par with state-of-the-art methods, UMWE and UMH.",multilingual approaches can learn an effective multilingual space for closeby languages.,reasoning test_1391,"In addition, most previous works do not consider the importance of each teacher layer and use the same layer weights among various tasks, which create a substantial barrier for generalizing the compressed model to different NLP tasks.",an adaptive compression model should be designed to transfer knowledge from all teacher layers dynamically and effectively for different NLP tasks.,reasoning test_1392,"Since different attention and hidden layers of BERT can learn different levels of linguistic knowledge, these layers should have different weights for various NLP tasks.",we propose a cost attention mechanism to assign weights for each attention and hidden layers automatically.,reasoning test_1393,"Given an 
event like ""Jerry repels Tom's attack"", to approximate the phrases in Atomic, we firstly annotate the person roles, that is, replacing the subject person with ""PersonX"" and the other person with ""PersonY"".","we get ""PersonX repels PersonY's attack"".",reasoning test_1394,"However, the counterfactual samples generated by most previous methods are simply added to the training data for augmentation and are not fully utilized.","we introduce a novel selfsupervised contrastive learning mechanism to learn the relationship between original samples, factual samples and counterfactual samples.",reasoning test_1395,"Although the MAMS described in Section 5.3 provides a training set with diversity, it remains difficult to improve aspect robustness for other domains, or future new datasets.","we propose a flexible method, adversarial training, for aspect robustness, which is applicable to any given dataset.",reasoning test_1396,A possible explanation is that the short reference summaries fail to capture all the important information of original documents.,directly comparing with document representations will suffer much less information loss.,reasoning test_1397,We make use of the heuristics that nearby sequences in the document contain the most important information to recover the masked words.,"the challenging retrieval part can be replaced by soft-attention mechanism, making our model much easier to train.",reasoning test_1398,"During training, we fix the position embeddings for the pre-appended special tokens, and randomly select 64 continuous positions from 0 to 564 for the other words.",the model can be used to encode longer sequences in downstream tasks.,reasoning test_1399,Rather a pointer is made to the location where the number 8 occurred in an algebraic word problem.,"using such an ""operand-context pointer"" enables a model to access contextual information about the number directly, as shown in Figure 1 (c); thus, the operand-context separation issue can be 
addressed.",reasoning test_1400,"We observed that the existing pure neural model's performance on low-complexity dataset of MAWPS was relatively high at 78.9%, compared to that of high-complexity dataset of ALG514 (44.5%).","using Expression tokens and operand-context pointers contributed to higher performance when applied to high-complexity datasets of ALG514 and DRAW-1K, as shown in Table 5.",reasoning test_1401,"As the expression fragmentation issue can arise for each token, probability of fragmentation issues' occurrence increases exponentially as the number of unknowns/Op tokens in a problem increases.","the vanilla Transformer model, which could not handle the fragmentation issue, yields low accuracy on high-complexity datasets.",reasoning test_1402,"When generating solution equations for the comparative phrases, the order of arguments is a matter for an equation that contains non-commutative operators, such as subtractions or divisions.",errors occurred when the order of arguments for comparative phrases with non-commutative operators was mixed up.,reasoning test_1403,"However, neural networks are less interpretable and need to be trained with a large amount of data to make it possible to learn such implicit logic.",we consider tackling the problems by exploiting logic knowledge.,reasoning test_1404,"Notice that the intent classifier is typically implemented using standard text classification algorithms (Weiss et al., 2012;Larson et al., 2019;Casanueva et al., 2020).","to perform OOS sample detection, methods often rely on one-class classification or threshold rejectionbased techniques using the probability outputs for each class (Larson et al., 2019) or reconstruction errors (Ryu et al., 2017, 2018).",reasoning test_1405,"However, in practice, collecting OOS data can be a burden for intent classifier creation, which is generally carried out by domain experts and not by machine learning experts.","in the ideal world, one should rely solely on in-scope data 
for this task because it is very difficult to collect a set of data that appropriately represents the space of the very unpredictable OOS inputs.",reasoning test_1406,It is a language representation model pre-trained on unlabeled text and conditioned on both the left and right contexts.,a simple output layer can be fine-tuned to attain strong results in many different tasks.,reasoning test_1407,"But, although BERT+ has not been significantly better than BERT with PT-BR chatbots, the proposed word graph-based approach had a great impact in reducing that 5% difference, since both present similar EER values, and still had a huge impact in FRR rates since BERT+ presented significantly better values.","it is likely that by improving the mapping of sentence and graph embeddings for those datasets, and consequently reducing that 5% gap in ISER, BERT+ will stand out as a significantly better approach than BERT.",reasoning test_1408,"Unlike the constraintbased methods, which use translations of the focus word to post-process the output of a WSD system, t emb provides the translation information in the form of an embedding directly as input to the WSD system.",translation information is used as an additional feature to improve sense predictions of the base WSD system.,reasoning test_1409,"Although attention weights in some NMT systems may be used to derive word alignment, such an approach is not necessarily more accurate than off-the-shelf alignment tools (Li et al., 2019).",our approach is to instead identify the word-level translations by performing a bitext-based alignment between the source focus words and their translations.,reasoning test_1410,"To highlight the improvement in contextualization alone, since the Within Word tasks before lemmatization may contain different word forms of the same lemma as the target words in each pair, we lemmatize all the target words in the dataset.",each pair in the Within Word tasks now contains the identical target word.,reasoning 
test_1411,"Moreover, latent-variable models can represent multimodal distributions.","for these conditional tasks, the latent variable can be used as a source of stochasticity to ensure more diverse translations (Pagnoni et al., 2018) or answers in a dialogue (Serban et al., 2017).",reasoning test_1412,"However, this model would not be more useful than a standard, left-to-right autoregressive models.","it is necessary to check that such useless, purely local features are not learned.",reasoning test_1413,"Moreover, if the latent variable of the VAEs did encode the label perfectly and exclusively, they would reconstruct the first words or recover sentence length with much lower accuracy than what is observed.",we conclude that seq2seq VAEs are biased towards memorizing the first few words and the sentence length.,reasoning test_1414," As Long et al. (2019) reported, the max-pooling operator is better than the average operator, both when the encoder is a LSTM and BoW (possibly because the maximum introduces a non-linearity).",we use the maximum operator.,reasoning test_1415,The latent variable is more predictive of global features and memorisation of the first words and sentence length is decreased.,these models are more suitable for diverse and controllable generation.,reasoning test_1416,"However, information related to the second word in the latent variable can help the decoder predict the first word.",gains in position i can only be attributed to information pertaining to the words in positions >=i.,reasoning test_1417,"However, previous work neglects the fact that there is usually a limited time budget to interact with domain experts (e.g., medical experts, biologists) and high-quality natural language explanations are expensive, by nature.",we focus on eliciting fewer but more informative explanations to reduce expert involvement.,reasoning test_1418,"Contrastive Natural Language Explanations Existing research in social science and cognitive science (Miller, 
2019; Mittelstadt et al., 2019) suggests contrastive explanations are more effective in human learning than descriptive explanations.",we choose contrastive natural language explanations to benefit our learners.,reasoning test_1419,"Unlike previous active learning on data-points, our class-based active learning is empirically insensitive to the change of random seeds and hyper-parameter (e.g., batch size).",we could collect the explanations in an on-demand manner.,reasoning test_1420,"However, Table 10 shows that even when continuing to train this model for a long time no multilinguality arises.",in this configuration the model has enough capacity to model the languages independently of each other -and due to the modifications apparently no incentive to try to align the language representations.,reasoning test_1421,"Recent work (Yu et al., 2020) shows that gradient conflict between dissimilar tasks, defined as a negative cosine similarity between gradients, is predictive of negative interference in multi-task learning.",we study whether gradient conflicts exist between languages in multilingual models.,reasoning test_1422,"As shown by the expansions per step in Table 2, VAR-STREAM uses the batch capacity of 100 most efficiently.","vAR-STREAM is faster than both vAR-BATCH and FIXED, despite overhead which is exacerbated in a small model.",reasoning test_1423,"However, since some languages do not typically use whitespace between words (e.g., Thai), we used the heuristic of SentencePiece meta symbol U+2581 to designate the beginning of the word.",a word is defined as the token span between two successive U+2581 symbols.,reasoning test_1424,Using three encoders did not yield clear improvements over two encoders.,we do not experiment with using more than three encoders.,reasoning test_1425,Bansal et al. 
(2019) demonstrated that better feature learning from supervised tasks helps few-shot learning.,we also evaluate multi-task learning and multi-task meta-learning for few-shot generalization.,reasoning test_1426,"A few recent projects reveal that GLUE tasks may be not sophisticated enough and do not require much tasks-specific linguistic knowledge (Kovaleva et al., 2019;Warstadt et al., 2019).","superGLUE benchmark, being more challenging, becomes much more preferable for evaluation of language models.",reasoning test_1427,"Furthermore, the normalization involves the prediction of both operation types and token labels, enabling TNT to learn from more challenging tasks than the standard task of masked word recovery.",the experiments demonstrate that TNT outperforms strong baselines on the hate speech classification task.,reasoning test_1428,"However, we showed that using En dev accuracy for checkpoint selection leads to somewhat arbitrary zero-shot results.","we propose reporting oracle accuracies, where one still fine-tunes using English data, but selects a checkpoint using target dev.",reasoning test_1429,"They use different language models and word embeddings (e.g., BERT, RoBERTa, or BiRNN), and have been trained on different data (e.g., DPR (Rahman and Ng, 2012), WinoGrande, or no additional data).",it is unclear whether the choice of the objective function is essential for pronoun resolution tasks.,reasoning test_1430,The relatively free word order this allows creates much less emphasis on the collocation of a semantic unit's tokens.,as conversational assistants progress toward multiple languages it's important to consider that constraints that are acceptable if only English is considered will not analogously scale to other languages.,reasoning test_1431,The refinement approach delegates responsibility of sessionbased semantic parsing to a separate dialog component.,refinement approaches tend to have a very limited ontology due to the semantic parser operating 
over a fixed input (non-session utterances).,reasoning test_1432,"However, for leaves with larger number of examples statistical significance alone is insufficient, because there are a large number of cases where there are small but significant differences from the ratio of agreement expected by chance.","in addition to comparing the p-value we also compute the effect size which provides a quantitative measure on the magnitude of an effect (Sullivan and Feinn, 2012).",reasoning test_1433,Whereas the Statistical Threshold uses effect size with the significance test which takes into account the sample size within a leaf leading to better leaves.,we choose to use Statistical-Threshold for all our simulation experiments.,reasoning test_1434,"As (1) illustrates, not all sentences in a response to an advice-seeking question constitute advice.","we want annotators to highlight which parts of the response to a question are advice, and which are not.",reasoning test_1435,"For instance, the ""activation""-anchored event (Figure 2) is both THEME and CAUSE of ""induced""and ""promote""-anchored event heads, respectively.","both r and h are multi-label, and the label for ""activation"" is encoded as +REGULATION, [THEME, CAUSE] where the order of r and h items is preserved.",reasoning test_1436,"In a crowdsourcing setting, we oftentimes hire a much larger set of annotators that are not professionally trained and may be only working on the task sporadically.",it is infeasible to ask them to follow detailed guidelines.,reasoning test_1437,"In our experiments, we aim at evaluating the multilingual and the cross-lingual question answering capabilities of different models.",we split the data in order to support both evaluation strategies: Multilingual and Cross-lingual.,reasoning test_1438,"Social biases appear to be a natural component of human cognition that allow people to make judgments efficiently (Kahneman et al., 1982).","they are often implicit-people are unaware of their own 
biases (Blair, 2002; Bargh, 1999)-and manifest subtly, e.g., as microaggressions or condescension (Huckin, 2002;Sue, 2010).",reasoning test_1439,"The main challenge is encouraging the model to focus on text features that are indicative of bias, rather than artifacts in data that correlate with the gender of the addressee but occur because of confounding variables (confounds).",the core of our methodology focuses on reducing the influence of confounds.,reasoning test_1440,"However, we only want the model to learn that references to appearance are indicative of gender if they occur in unsolicited contexts.","our model needs to account for the effects of O TXT: Because of correlations between W GEN and O TXT, COM TXT values may contain features that are predictive of W GEN, but are caused by O TXT, rather than by W GEN. We face a similar problem with W TRAITS.",reasoning test_1441,"While there may still be overlap in some latent W TRAITS, we expect there to be less overlap in W TRAITS between the train and test set than within the train set.",improved performance over the held-out test set would suggest that demotion effectively reduces the influence of the latent confounding variables-the model learns characteristics of comments addressed to women generally rather than characteristics specific to the individual people in the training set.,reasoning test_1442,"Because most gender-related microaggressions target women, if our model predicts that the reported microaggression was addressed to a woman (e.g. 
W GEN = F), we assume that the post is a gender-tagged microaggression.",our models are not trained at all for identifying gender-tagged microaggressions.,reasoning test_1443,"However, by controlling for O TXT, propensity matching discards many of these comments.","by demoting a confounding variable, we make the prediction task more difficult.",reasoning test_1444,"Specifically, for the 6K words, we report the mean precision and recall of every 10 consecutive words in the vocabulary.","we have 600 data points, each representing precision/recall of 10 words.",reasoning test_1445,The two UI elements have very similar images (magnifiers) although they are for searching different objects.,context information is critical for models to decode the correct objects.,reasoning test_1446,"To avoid information leaks, the split was done app-wise so that all the screens from the same app will not be shared across different splits.","all the apps and screens in the test dataset are unseen during training, which allow us to examine how each model configuration generalizes to unseen conditions at test.",reasoning test_1447,"If only textual inputs are given, they cannot effectively incorporate visual knowledge in their representations.",their help for entailing the contradiction between a and b is limited.,reasoning test_1448,"Fourth, individuals will mirror the language as a way of decreasing social distance which can increase trust (Scissors et al., 2008); Wang et al. 
(2015) found that lexical alignment is associated with increased emotional support.",we include a feature for lexical alignment as the % of the condolence's words that were also used in the distress comment.,reasoning test_1449,"To get rid of this annotation burden, we formulate the problem from the perspective of Multiple Instance Learning (MIL; Keeler and Rumelhart, 1992).",our model learns to spot story attributes in reviews in a weakly supervised fashion and does not expect direct tag level supervision.,reasoning test_1450,"However, we expect a latent correlation between Y P and Y C that can be jointly modeled while modeling P (Y P |X), hence helping the extraction of Y C without any direct supervision.","we first supervise a model containing a synopsis encoder and a review encoder to learn P (Y P |X) (Section 4.1), and later we use the trained review encoder to generate complementary tagset Y C (Section 4.2).",reasoning test_1451,"For example, we observe that the model usually puts higher attention weights on opinion-heavy words in the reviews.",we use the attention weights on words and sentences in reviews to extract an additional open-vocabulary tagset Y C .,reasoning test_1452,"However, directly fine-tuning such models for long texts like synopses and reviews is extremely memory expensive.","we employ Sentence-BERT (SBERT; Reimers and Gurevych, 2019) in our work, which is a state-of-the-art universal sentence encoder built with pre-trained BERT (Devlin et al., 2019).",reasoning test_1453,"Figure 7(a) shows that, for the predefined tags, our tagsets were more relevant than the baseline ones for 57% movies, the baseline tags were better than HN(A)+MIL for 24% movies, and both systems were equally performing for 19% movies.",we get further verification of Q2.,reasoning test_1454,"Results in Table 4 show that, our system can indeed predict tags that are very relevant to the new types of stories.",we conclude that our approach also shows great promise for other 
domains and can be extended with little effort.,reasoning test_1455,"Beyond its interpretation as shared information, mutual information gives little in terms of interpretability: It has no consistent reference points, beyond that the minimum possible MI is zero.",several variants of MI are preferred in community detection.,reasoning test_1456,"Combinatory categorial grammar (CCG) is a lexicalized grammatical formalism, where the lexical categories (also known as supertags) of the words in a sentence provide informative syntactic and semantic knowledge for text understanding.","ccG parse often provides useful information for many downstream natural language processing (NLP) tasks such as logical reasoning (Yoshikawa et al., 2018) and semantic parsing (Beschke, 2019).",reasoning test_1457,"As high-quality dependency parsers are not always available, we do not want our CCG supertaggers to rely on the existence of dependency parsers.",we need another way to extract useful word pairs to build GCN models.,reasoning test_1458,"It is not very convenient to utilize a large amount of unlabeled data in the baseline method wt, since this will directly increase the amount of augmented data.",some of augmented data may not be utilized before sequence tagging models converge.,reasoning test_1459,"Their loss is not applicable in our setting, because the embedding space of an autoencoder is not unit-normalized like word vectors typically are.",we employ cosine loss and leave the exploration of other regression losses to future work.,reasoning test_1460,"Our model’s performance is close to that of Wang et al. 
(2019), even without FGIM at inference time.",our model has a much lower computational overhead.,reasoning test_1461,"Our motivation comes from the fact that sentence generation along parse trees can intrinsically capture and maintain the syntactic information (Eriguchi et al., 2017;Aharoni and Goldberg, 2017;Iyyer et al., 2018), and show better performances than sequential recurrent models (Li et al., 2015;Iyyer et al., 2014).",we design a novel tree-based autoencoder to generate adversarial text that can simultaneously preserve both semantic meaning and syntactic structures of original sentences.,reasoning test_1462,"Moreover, the tree structure allows us to modify the tree node embedding at different tree hierarchies in order to generate controllable perturbation on word level or sentence level.","we explore the following two types of attacks at root level and leaf level T3(SENT) and T3(WORD), which are shown in Figure 3 and Figure 4.",reasoning test_1463,"From the table, we find using different initialization methods will greatly affect the attack success rates.",the initial sentence selection methods are indeed important to help reduce the number of iteration steps and quickly converge to the optimal z * that can attack the model.,reasoning test_1464,"Previous methods craft adversarial samples mainly based on specific rules (Li et al., 2018;Gao et al., 2018;Alzantot et al., 2018;Ren et al., 2019;Jin et al., 2019;Zang et al., 2020).",it is difficult for these methods to simultaneously guarantee fluency and semantic preservation in the generated adversarial samples.,reasoning test_1465,"On the other hand, BERT is a pre-trained masked language model on extremely large-scale unsupervised data and has learned general-purpose language knowledge.",bERT has the potential to generate more fluent and semantically consistent substitutions for an input text.,reasoning test_1466,"While most words are still single words, rare words are tokenized into sub-words.",we treat
single words and sub-words separately to generate the substitutes.,reasoning test_1467,"Since the average sequence length is relatively long, the target model tends to make judgments by only a few words in a sequence, which is not the natural way of human prediction.","the perturbation of these keywords would result in incorrect prediction from the target model, revealing the vulnerability of it.",reasoning test_1468,"Nevertheless, candidates generated from the masked language model can sometimes be antonyms or irrelevant to the original words, causing a semantic loss.",enhancing language models to generate more semantically related perturbations can be one possible solution to perfect BERT-Attack in the future.,reasoning test_1469,"Previous studies have created synthetic data from generic news summarization corpora which have a small set of aspects (e.g., ""sports"", ""health"" and other 4 aspects in (Frermann and Klementiev, 2019)).",models trained on these data tend to be restricted to the pre-defined set and fall short of summarizing on other diverse aspects.,reasoning test_1470,This evaluation setup clearly does not account for the entity distributions in the real data.,the reported performance scores do not reflect the effectiveness of these models when adapting to a new domain.,reasoning test_1471,Tag set extension Our first set of experiments are motivated by the fact that new types of entities often emerge in some domains such as medical and social media.,we evaluate the performance of our systems on recognizing new entity types as they emerge in the source domain.,reasoning test_1472,"Concretely, ""Apple"", ""Microsoft"", ""Coca"", and ""Cola"" all contain only alphabetical letters with the first one capitalized; Tokens ""Inc."", ""Corp."", and ""Co."" all are alphabetical letters with first letter capitalized, and they all end with a dot.","""Apple Inc."" and ""Microsoft Corp."" have the same sequence of structure vectors.",reasoning test_1473,"Moreover, 
for consecutive tokens with identical structure vectors, we combine them into one and hence ""Coca Cola"" shares the same structure vectors with the other two.","if one of the three is labeled as , we can apply the same sequence of labels to the other two examples as weak labels without actual human annotation.",reasoning test_1474,Current knowledge bases (such as ATOMIC) contain social rather than physical effects.,generation models trained on these knowledge bases incorrectly force the effects to be social.,reasoning test_1475,"We also found that when manually evaluating on ∼200 dev datapoints, the score was systematically a few (∼10%) points higher than BLEU, while the trends and model rankings remained the same, indicating robustness of the automatic metric.","the proposed metric aligns with human evaluation, and is able to use existing generation metrics thereby simplifying evaluation, allowing easier reproducibility.",reasoning test_1476,"Additionally, user feedback from these experiments suggested that we generate shorter entries, as longer ones frequently devolved into unrelated and incoherent sentences.","for our final experiments detailed in the next section, we also truncate model outputs to a maximum of four sentences.",reasoning test_1477,"The amount of author effort involved in evaluation, when combined with the relatively small size of the STORIUM community, can cause evaluation to take a considerable amount of time (i.e., to collect hundreds of judgements) as evidenced in our analysis (Section 5).","our platform is not currently suitable for ""instant"" evaluation of generated stories.",reasoning test_1478,"Supporting multi-domain conversations spanning multiple APIs quickly grows out of hand, requiring expert linguists and rigorous testing to ensure the grammatical correctness and appropriateness of generated utterances.",data-driven generative approaches have gained prominence.,reasoning test_1479,The main objective of this work is to improve 
attention supervisions for the purpose of better text classification.,we evaluate the three attention methods by their contribution to the classification performance.,reasoning test_1480,"On the other hand, for the head categories with many instances, general approaches such as the single CNN model (Kim, 2014;Liu et al., 2017) may be more effective in terms of performance and more efficient in terms of complexity.","our basic idea for tackling the problem of extremely imbalanced multi-label text classification is a hybrid solution that adapts a general approach (i.e., a Single network) for head categories and a few-shot approach for tail categories, so that we can take the advantages of both of them.",reasoning test_1481,"For example, if an instance has multiple categories and each category has a representation vector, it is difficult to learn a representation of this instance near the representations of all these categories through a single similarity output.",we propose a category-specific similarity in the Siamese structure to capture the rich information in the similarities.,reasoning test_1482,"Moreover, in some situations, parsers are supplied by third parties, making it impossible to alter them.","assuming parsers are a black box, it is indispensable to conduct research on an interactive approach for enhancing the text-to-SQL technique in complex scenarios.",reasoning test_1483,"Compared to aligning an NL question with a SQL query, the alignment of two NL questions is more reasonable because it utilizes a similar linguistic structure and can make better use of pre-trained models (e.g., BERT).","before the alignment, we restate the predicted SQL y into a natural language question x .",reasoning test_1484,"As users are non-expert and unfamiliar with database operations, simply picking an option is more natural and friendly.",the Question Generator is designed to generate a multi-choice question for each uncertain token.,reasoning test_1485,"As analyzed in 
Section 2, most of the uncertain tokens are related to database information.","for each uncertain token in an NL question, we find out the corresponding database and add all the column and table names into the candidate set.",reasoning test_1486,A major challenge to the proposed method is the absence of manual annotation of mentions and relations.,we propose an automatic annotation method (Section 2.4) based on aligning tokens in a SQL with corresponding question.,reasoning test_1487,The main reason is that the sentence embeddings in the pipeline approach are not shared.,"although these two subtasks can be well learned separately, they are not trained to collaborate with each other.",reasoning test_1488,"Indeed, in the rebuttal phase, authors reply reviewer's suggestions and questions very carefully to make the points clear, sometimes by citing the review arguments.","the structure and the format of rebuttals are relatively fixed, while reviewers have more flexibility in the style and the structure when writing reviews.",reasoning test_1489,"As shown in Figure 1, graph-attention will degenerate into a vanilla selfattention layer when the nodes in the graph are fully connected.",the graph-attention can be considered as a special case of self-attention.,reasoning test_1490,It is obvious that graph attention can not cover the last three attention patterns.,we draw a conclusion that self attention has advantages on generality and flexibility.,reasoning test_1491,"As regards WSD, instead, we are no longer bound by the long-standing limits of predefined sense inventories.","it is possible to give (i) a meaningful answer for words that are not in the inventory, and (ii) one that fits the meaning and the granularity required by a given context better than any sense in the inventory.",reasoning test_1492,Other errors stem from the fact that the model can only rely on the knowledge about possible definienda that it is able to store in the parameters during the pre-training 
and training stages.,"if the contextual knowledge is not sufficient to extrapolate a definition, the model which is required to always generate an output will hallucinate an answer on the basis of contextual clues, incurring the risk of introducing nonfactualities.",reasoning test_1493,"Reif et al. (2019) showed that senses are encoded with finer-grained precision in higher layers, to the extent that their representation of the same token tends not to be self-similar across different contexts (Ethayarajh, 2019; Mickus et al., 2020).","we hypothesise that abstract, type-level information could be codified in lower layers instead.",reasoning test_1494,"Interestingly, we find that the non-spurious correlations are more located in entity representation rather than context representation.",our method eliminates part of the spurious correlations between context representation and output labels.,reasoning test_1495,"Furthermore, fully-annotated training data will be rare due to the expensive cost.",training set can only cover a minor part of test mentions and diverse context patterns must be learned from minimal instances.,reasoning test_1496,"In open NER, however, most entities (e.g., movie, song and book) do not have such strong name regularity, and some of mentions can even be random utterances.",it is critical to evaluate the impact of name regularity on generalization.,reasoning test_1497,"This ability, obviously, is not what we desire because 1) in real world applications, most entity mentions are new and unseen, which means out-of-dictionary mentions will dominate the test process; 2) because the training instances are very limited in open situations, it is too expensive to achieve high mention coverage; 3) many longtail mentions in the training set would be oneshot, i.e., the mention only appears once in the training data.",it is necessary to exploit whether NER models can still reach reasonable performance in low mention coverage situation.,reasoning 
test_1498,"We believe this is because, as some previous studies in other tasks (Zhang et al., 2016;Lu et al., 2019) have pointed out, neural networks have strong ability and tendency to memorize training instances.","the high mention coverage will mislead the models to mainly memorize and disambiguate frequent entity names even though they are irregular, but ignore informative context patterns which are useful for generalization over unseen mentions.",reasoning test_1499,"While most graph-structured data has a wide variety of inherent geometric structures, e.g. partially tree-like and partially cyclical, the above studies model the latent structures in a single geometry with a constant curvature, limiting the flexibility of the model to match the hypothetical intrinsic manifold.",using a product of different constant curvature spaces might be helpful to match the underlying geometries of temporal knowledge graphs and provide high-quality representations.,reasoning test_1500,"Besides, there is no statistically significant difference in the model performance when using different optimizers, such as Riemannian Adam (RADAM) and Riemannian stochastic gradient descent (RSGD).","for the model's simplicity, we decide to use RSGD.",reasoning test_1501,"Given the known bias that female characters are portrayed with less agency (Sap et al., 2017), our goal is to re-balance their agency levels to be more on par with those of male characters.","we revise only the sentences describing female characters to have higher agency, using POWERTRANS-FORMER.",reasoning test_1502,"Meanwhile, in both EURLEX57K and AMAZON13K, the performance of ATTENTION-XML is competitive with both TF-IDF-based PLT-based methods and BIGRU-LWAN, suggesting that the bag-of-words assumption holds in these cases.",we can fairly assume that word order and global context (longterm dependencies) do not play a drastic role when predicting labels (concepts) on these datasets.,reasoning test_1503,"We simulate a 
setting where we have not enough information about the biases for training a debiased model, and thus biased examples should be identified automatically.",we only use the existing challenge test set for each examined task strictly for evaluation and do not use the information about their corresponding bias types during training.,reasoning test_1504,After the embeddings learning we can get a distribution over all of the subframe labels for each paragraph which is based on the cosine similarity between the embeddings of the paragraph and subframe labels.,our model combined with the labeled n-grams have the ability to expand the subframe labels to unlabeled text from other domains of the same topics without any human evaluation.,reasoning test_1505,"For each object with M instances in the image, we randomly remove m instances from the image s.t. m ∈ {0, . . . , M} using polygon annotations from the COCO (Lin et al., 2014) dataset","for each image, we get multiple masked images, with pixels inside the instance bounding-box removed, as shown in Figure 3.",reasoning test_1506,"However, the use of AL with deep pre-trained models for text classification -and BERT in particular -has so far received surprisingly little consideration.","while recent papers have demonstrated the value of AL for various deep-learning text classification schemes (Shen et al., 2017; Zhang et al., 2017; Siddhant and Lipton, 2018; Prabhu et al., 2019), the potential of AL combined with BERT is yet to be explored.",reasoning test_1507,"In this setting we assume high-precision heuristics that enable generating a relatively unbiased sample; but in many real-world cases such heuristics may not exist, or are expected to have limited coverage and would not enable sampling at will.","such heuristics cannot be assumed to yield a large training set, but may nevertheless be used for obtaining a small initial seed in an active learning setting.",reasoning test_1508,"However, unlike the above models, VVMAs 
focus on the low levels of execution: the VVMA is an architecture that speeds up matrix multiplications.","it is an efficient model that relates to hardware accelerators directly and it is universal, as matrix multiplication is the dominant computational factor for neural network inference.",reasoning test_1509,"While every neural network requires a certain budget of floating point operations for a target computation, how fast such computations are in practice depends not on the size of this budget but rather on the number of wall clocks needed in order to cover all floating point operations.",it is important to combine the software and the hardware advances in a co-design manner to optimize an efficient model for the correct metric: wall clocks.,reasoning test_1510,"As the Transformer is already getting noticeable impact in industrial settings, e.g., for machine translation and Web search, there is active research in developing more efficient Transformer architectures (Sanh et al., 2019; Kitaev et al., 2020; Beltagy et al., 2020;Zaheer et al., 2020).","with each new version of a Transformer architecture, new VVMA experiments would be needed in order to measure the potential improvements in efficiency that VVMA would yield.",reasoning test_1511,"As discussed in Section 2, most existing datasets were crowdsourced with a fixed set of prompts and no taboo constraints, which leads to limited diversity in the data.","models trained on the data may be brittle, failing when tested on new data in the same domain.",reasoning test_1512,"Following Shah et al. (2018); Rastogi et al. 
(2019), every grammar production in the simulator is paired with a template whose slots are synchronously expanded.",each dialog state or system act is associated with a template utterance.,reasoning test_1513,One observation about the standard decoder is that it has to predict long strings with closing brackets to represent a tree structure in the linearization.,the total number of decoding LSTM recursions is the number of tree nodes plus the number of non-terminals.,reasoning test_1514,"While TF-IDF based document linking provides a co-occurence-based similarity measure between documents and conversations, there is no guarantee such linking will improve dialog modeling performance.",we aim to train a linking model such that conditioning on linked documents has a positive effect on dialog modeling performance.,reasoning test_1515,"The reason is that we cannot always rely on expensive human resources to annotate large-scale task-specific labeled data, especially considering the inestimable number of tasks to be explored.","a reasonable attempt is to map diverse NLP tasks into a common learning problem-solving this common problem equals to solving any downstream NLP tasks, even some tasks that are new or have insufficient annotations.",reasoning test_1516,"Although large-scale pre-trained language models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) have achieved super-human performances on these datasets, there have been concerns raised about these models exploiting idiosyncrasies in the data using tricks like pattern matching (McCoy et al., 2019).","various stress-testing datasets have been proposed that probe NLI models for simple lexical inferences (Glockner et al., 2018), quantifiers (Geiger et al., 2018), numerical reasoning, antonymy and negation (Naik et al., 2018).",reasoning test_1517,"They show that this instability largely arises from high inter-example similarity, as these datasets typically focus on a particular linguistic phenomenon 
by leveraging only a handful of patterns.","following their suggestion, we conduct an instability analysis of CONJNLI by training RoBERTa on MNLI with 10 different seeds (1 to 10) and find that the results on CONJNLI are quite robust to such variations.",reasoning test_1518,"Since BERT converts each word into word pieces, we first propagate the gold SRL tags, which are for each word, to word pieces.","for a word with multiple word pieces, we assign the tag B-ARG to the first word piece, and the tag I-ARG to the subsequent word pieces.",reasoning test_1519,"It makes words a special case of motif instances, and one can easily construct a similar bipartite graph for words.","in the rest of this section, we use motif instances to explain our ranking score design.",reasoning test_1520,"Intuitively, a highly label-indicative motif instance would not belong to the seed sets of multiple labels.","when any motif instance is expanded to seed sets of multiple classes, we stop the expansion of motif instances of the corresponding motif pattern.",reasoning test_1521,We make the simplifying assumption that all changes in wikiHow's revision history are made for the better and therefore represent needed revisions to the original version of an article.,"we treat all sentences that went through revision in wikiHowToImprove as requiring revision and all unrevised sentences from our extension (see 2.1) as requiring no revision.",reasoning test_1522,State of the art research for date-time entity extraction from text is task agnostic.,"while the methods proposed in the literature perform well for generic date-time extraction from texts, they don't fare as well on task specific date-time entity extraction where only a subset of the date-time entities present in the text are pertinent to solving the task.",reasoning test_1523,"“Let’s schedule for tomorrow.
Next month, I plan on taking up Mr Baskerville’s case” Here, the model without Lt generates high attention weights for embeddings associated with “tomorrow”, since the localization of the attention weights is much more spread out.","it also uses the embeddings associated with ""tomorrow"" for predicting the label of ""next month"", and hence, predicts it to be relevant to scheduling when it is not.",reasoning test_1524,Each resume is manually annotated to its most appropriate CRC position by experts through several rounds of triple annotation to establish guidelines.,a high Kappa score of 61% is achieved for interannotator agreement.,reasoning test_1525,"At any time, there are various positions posted for the same level from different divisions, cardiology, renal, infectious disease, etc.",it is common to see resumes from the same applicant applying to several job postings within the same CRC level.,reasoning test_1526,"To the best of our knowledge, no previous study has explicitly focused on this question.",the goal of this paper is to provide an answer to this question.,reasoning test_1527,The representations for all mentions are then compared (usually sequentially) and mention pairs judged to be most similar are considered coreferent.,the mention representation is a key component in modern coreference resolution models.,reasoning test_1528,"The first signal is that the same name can appear multiple times in a text, and these mentions very likely corefer.",we can train mention representations to be similar for these mentions.,reasoning test_1529,"We do not want to use all mentions in the text since, for most of these, we don't have ground-truth clusters.",we only consider mentions that contain proper names that appear one or more times.,reasoning test_1530,The decision of when to stop querying more external articles needs to be made after successive evaluations of the candidate answers.,the decision making process is inherently sequential.,reasoning 
test_1531,"Moreover, the samples show that the generation process is more sophisticated than just a trivial path flattening (i.e., merging text from all edge parts followed by minimal edits).",the proposed approach can eventually become a part of a more sophisticated system converting graphs to a coherent textual story and vice versa.,reasoning test_1532,"Entities, relationship are all sequences of tokens that need to be generated properly, which does not fit well in the conventional KB completion evaluation framework.","we define a new meaningful commonsense KB completion task for generative models, and present the challenges that arise from it.",reasoning test_1533,"Such automatic alignment between knowledge graph and texts provides distant supervision (Mintz et al., 2009) for pre-training but it is bound to be noisy.",we design a selection strategy and only retain plausible alignments with high semantic overlap.,reasoning test_1534,"Apparently, these pairs cannot serve our goal to build a knowledge-grounded language model.",we propose a data selection step to suppress the noise and filter out the data pairs of our interests.,reasoning test_1535,We verify that there is zero RDF triple seen during pre-training though 31% entities are seen.,we can confirm the comparison with other baselines is still fair given no information from test/dev is leaked.,reasoning test_1536,"Unlike the previous two human-annotated datasets from different domains, WikiBio is also scraped from Wikipedia.",we filtered out the instances of KGTEXT from the first paragraph of the biography domain to ensure no overlap or leakage about Wikibio's dev/test set.,reasoning test_1537,"However, supervising the copy attention does not have much influence on the performance.","in the following experiments, we will run experiments for both encoding schemes with a copy mechanism without copy loss.",reasoning test_1538,"In a human-written document, subsequent text often refers back to entities and tokens 
present earlier in the preceding text.",it would increase coherence of text generated in downstream to incorporate the copy mechanism into pre-training on an unlabeled corpus.,reasoning test_1539,"For example, in Figure 1, one can observe that randomly sampling actions from the game vocabulary leads to several inadmissible ones like 'north a' or 'eat troll with egg'.","narrowing down the action space to admissible actions requires both syntactic and semantic knowledge, making it challenging for current systems.",reasoning test_1540,"The Jericho framework implements an admissible action handicap by enumerating all combinations of game verbs and objects at each state, and testing each action's admissibility by accessing the underlying simulator states and load-and-save functions.","the handicap runs no faster than a GPT-2 inference pass, and could in fact be unavailable for games outside Jericho.",reasoning test_1541,"In the MSCOCO validation set, 'man', 'elephant', and 'river' have more exposure, while 'traffic' and 'highway' are less mentioned.",the first group of references has a much higher consensus CIDEr score than the second group.,reasoning test_1542," QA systems accurately answer simpler, related questions such as “What profession does H. L. 
Mencken have?” and “Who was Albert Camus?” (Petrochuk and Zettlemoyer, 2018).","a promising strategy to answer hard questions is divide-and-conquer: decompose a hard question into simpler sub-questions, answer the sub-questions with a QA system, and recompose the resulting answers into a final answer, as shown in Figure 1.",reasoning test_1543,"Second, we train a decomposition model on the mined data with unsupervised sequence-to-sequence learning, allowing ONUS to improve over pseudo-decompositions.","we are able to train a large transformer model to generate decompositions, surpassing the fluency of heuristic/extractive decompositions.",reasoning test_1544,"This format makes it difficult to determine stance in the typical topic-phrase (pro/con/neutral) setting with respect to a single topic, as opposed to a position statement (see Topic and ARC Stance columns respectively, Table 2).","we collect annotations on both topic and stance, using the ARC data as a starting point.",reasoning test_1545,"However, prior work used static word embeddings and we want to take advantage of contextual emebddings.","we embed a document and topic jointly using BERT (Devlin et al., 2019).",reasoning test_1546,"For both stance labels and models, performance increases when the majority sentiment polarity agrees with the stance label (M + for pro, M for con).",we investigate how susceptible both models are to changes in sentiment.,reasoning test_1547,"Given the limited amount of labeled data, we first want to augment our dataset by labeling the unlabeled data as well.",we have a two step approach: 1. Build a model to resolve ambiguous time terms (AM versus PM) and label the unlabeled data. 2. 
Train a model for time of day prediction by hour using the augmented dataset ,reasoning test_1548,"These features do not exist in the unlabeled training set, to ensure the models learn to identify the proper AM/PM label without cheating.",we replace all the time phrases with the same special token.,reasoning test_1549,"While the dataset does not have the publication date of the book in the metadata, we were able to access the authors and the years the author was alive.",we created groupings of our data by time period based on the year of the author's birth.,reasoning test_1550,"However, since the semantic representation used by SCAN only covers a small subset of English grammar, SCAN does not enable testing various systematic linguistic abstractions that humans are known to make (e.g., verb argument structure alternation).",it is unclear whether progress on SCAN would generalize to natural language.,reasoning test_1551,"Our grammar does not generate VP-modifying PPs (the only PP verbal dependents are recipient to-phrases, which are always arguments rather than modifiers).","all PP modifiers in our dataset should strictly have an NP-attachment reading, although for human readers VP-attachment readings could sometimes be more prominent based on the lexical content of the sentences.",reasoning test_1552,"Note that as demonstrated by Reimers and Gurevych (2019), averaging context embeddings consistently outperforms the [CLS] embedding.","unless mentioned otherwise, we use average of context embeddings as BERT sentence embeddings and do not distinguish them in the rest of the paper.",reasoning test_1553,"Note that co-occurrence statistics is a typical tool to deal with “semantics” in a computational way — specifically, PMI is a common mathematical surrogate to approximate word-level semantic similarity (Levy and Goldberg, 2014; Ethayarajh et al., 2019).","roughly speaking, it is semantically meaningful to compute the dot product between a context embedding and a word 
embedding.",reasoning test_1554,"This is a common problem in the context of representation learning (Rezende and Viola, 2018; Li et al., 2019;Ghosh et al., 2020).","the resulting sentence embeddings can locate in the poorly-defined areas, and the induced similarity can be problematic.",reasoning test_1555,BERT embeddings may fail in such cases.,"we argue that the lexical proximity of BERT sentence embeddings is excessive, and can spoil their induced semantic similarity.",reasoning test_1556,"Concretely, the likelihood training pays little attention to the top ranks in terms of the target token probabilities (Welleck et al., 2020), or maximizing likelihood itself does not adequately reflect human language processing (Holtzman et al., 2019).","with the maximum likelihood-based training, models learn to produce tokens frequently appearing in the data more often.",reasoning test_1557,"The more uniform a label distribution is, the less likely decision boundaries are biased in favor of frequent classes.","we aim to maximize the degree of uniformity of frequency distributions for both (i) tokens within each class and (ii) classes themselves (i.e., the sum of token frequencies within each class), to avoid the class imbalance problem (Buda et al., 2018) over the course of training.",reasoning test_1558,"Tokens in lyrics show a distribution largely different from general articles; for instance, repeated phrases are abundant in lyrics.",it provides an additional unique angle for model evaluations and comparisons.,reasoning test_1559,"We attribute this observation to the distinctive characteristics of lyrics, in which the same phrases are rhythmically repeated throughout the songs in the form of chorus or hook.","for lyrics dataset, forcing models to discourage reusing previously used tokens may adversely affect the likelihood of the generated texts.",reasoning test_1560,"The potential reason may be the lack of topics (i.e., keywords) in the model response, as illustrated in
the graph that only contains context-topic nodes.","the graph reasoning module in our GRADE fails to induce an appropriate graph representation, which harms the coherence scoring.",reasoning test_1561,"Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.","the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.",reasoning test_1562,"From the perspective of data collection for studying a particular phenomenon, TORQUE has done more on defining the task and developing a scalable crowdsourcing pipeline.",tORQUE is also much larger than QA-tempEval and the annotation pipeline of tORQUE can be easily adopted to collect even more data.,reasoning test_1563,"Despite the progress on the encoder side, the current state-of-the-art models use a rather standard decoder: it functions as a language model, where each word is generated given only the previous words.",one limitation of such decoders is that they tend to produce fluent sentences that may not retain the meaning of input AMRs.,reasoning test_1564,"However, this dataset does not explicitly state the topics of individual poems.","we automatically predict a topic for each poem with the help of our topic prediction model, which will be described in Subsection 3.3.",reasoning test_1565,"Surprisingly, however, 12 out of 20 and 9 out of 20 poems are recognized correctly for known and unknown words, respectively.",our model works well even for topics it has not seen during training.,reasoning test_1566,It is noteworthy that the text encoder part actually dominates the overall cost.,the gap between our model and the RGCN are further narrowed if we consider the cost of the entire
model.,reasoning test_1567,"We compute the sum of the vector representations weighted by the probabilities, and then input it into the downstream model.",opTok becomes to assign a high probability to the tokenization which improves the performance of the downstream task.,reasoning test_1568,The goal of this study is to improve the performance of downstream tasks by optimizing the tokenization.,we evaluate OpTok on various text classification tasks to validate its effect.,reasoning test_1569,"Moreover, many studies have reported that training models with a stochastic tokenization lead to a better performance of the downstream tasks than training a model using deterministic tokenization (Kudo, 2018;Hiraoka et al., 2019;Provilkov et al., 2019).",we trained the encoder and downstream model using subword regularization provided by SentencePiece.,reasoning test_1570,It is still unclear whether the optimized tokenization leads to the improvement described in Section 3.3 because we trained all components simultaneously.,we investigate whether the optimized tokenization contributes to the improvement of the performance on the downstream task.,reasoning test_1571,"Sarcasm is prevalent in today's social media platforms, and it can completely flip the polarity of sentiment or opinion.","an effective sarcasm detector is beneficial to applications like sentiment analysis, opinion mining (Pang and Lee, 2007), and other tasks that require people's real sentiment.",reasoning test_1572,Consider the given examples in Figure 1; people can not recognize sarcasm merely from text unless they find the contradiction between text and images.,capturing the incongruity between modalities is significant for multi-modal sarcasm detection.,reasoning test_1573,Tay et al. 
(2018) argues that the words that contribute to the incongruity (usually accompany with a high attention value) should be highlighted.,a more discriminative pooling operator like max-pooling is desirable in our case.,reasoning test_1574,"Obviously, the methods based on text modality achieve better performance than the method based on image modality.",text information is more useful than image information for sarcasm detection.,reasoning test_1575,Our model is designed to capture the incongruity information.,incongruous regions on the images are more likely to be attended by our model.,reasoning test_1576,"In addition, we find that our model might struggle in those instances requiring external knowledge, such as a speaker's facial gesture or contextual information.",external information is also essential for sarcasm detection.,reasoning test_1577,Different users usually prefer different news information.,"personalized news recommendation, which aims to display news articles to users based on their personal interest, is a useful technique to improve user experience and has been widely used in many online news services (Wu et al., 2019b).",reasoning test_1578,"Besides, these methods represent items using their IDs, and are difficult to handle new items since many news articles are posted every day which are all new items.","these federated learning based recommendation methods have their inherent drawbacks, and are not suitable for news recommendation.",reasoning test_1579,"According to the data processing inequality (McMahan et al., 2017), these gradients never contain more private information than the raw user behaviors, and usually contain much less information (McMahan et al., 2017).",the user privacy can be better protected compared with the centralized storage of user behavior data as did in existing news recommendation methods.,reasoning test_1580,"Moreover, different from these existing news recommendation methods which are all trained on centralized storage 
of user behavior data, in our FedNewsRec the user behavior data is stored on local user devices and is never uploaded.",our method can train accurate news recommendation model and meanwhile better protect user privacy.,reasoning test_1581,"Luckily, the gap between the performance of FedNewsRec and CenNewsRec is not very big.",our FedNewsRec method can achieve much better privacy protection at the cost of acceptable performance decline.,reasoning test_1582,"However, human ground-truth construction for summarization is time-consuming and laborintensive.",a more flexible summary generation framework could minimize manual labor and generate useful summaries more efficiently.,reasoning test_1583,"Each year, new shared tasks and datasets are proposed, ranging from classics like sentiment analysis to irony detection or emoji prediction.","it is unclear what the current state of the art is, as there is no standardized evaluation protocol, neither a strong set of baselines trained on such domainspecific data.",reasoning test_1584,"However, we acknowledge that other important tasks may need to be evaluated differently.",for future work we would like to include more tasks in the context of social media NLP research.,reasoning test_1585,"Although there are traditional non-neural parsers using n-grams as features to improve parsing (Sagae and Lavie, 2005;Pitler et al., 2010), they are limited in treating them euqally without learning their weights.",unimportant n-grams may deliver misleading information and lead to wrong predictions.,reasoning test_1586,"However, there are cases that long n-grams can play an important role in parsing when they carry useful context and boundary information.","we extend the span attention with a category mechanism (namely, categorical span attention) by grouping n-grams based on their lengths and weighting them within each category.",reasoning test_1587,"It is possible for the programmer to output illegal actions that do not follow the predefined 
action template (e.g., actions with missing a position component), especially when the programmer is not fully trained.",the interpreter checks if an action is valid and skips invalid actions by returning the input sequence.,reasoning test_1588,"We first experiment with N = 10, L = 5, and D = 10K, but all methods can reach a near-perfect sequence accuracy (see Figure 3).",we adjust N from 10 to 100 to make the task more challenging.,reasoning test_1589,"Note that online training is not part of the standard training procedure for End2end and Tagging, however, we use online training with End2end and Tagging for the sake of a fair comparison.","for End2end and Tagging, the online training acts like a data augmentation technique, providing more data points for training.",reasoning test_1590,"To estimate human performance within each metric, we treat each reference sentence in dev/test data as a ""system prediction"" to be compared with all other references, which is equivalent to compute inter-annotator agreement within each metric.",systems that have better generative ability than average crowd-workers should exceed this.,reasoning test_1591,"Moreover, considering the fact that summarization is not an easy task even for people, reliable human-labeled data are also difficult to obtain.","several unsupervised summarization approaches have been proposed, which do not require reference summaries for the target domain.",reasoning test_1592,"Taking advantage of corpora with billions of tokens, the pretrained language models learn universal and robust representations for various semantic structures and linguistic relationships.","pretrained models have been widely used with considerable success in applications such as question answering (Zhu et al., 2018), sentiment analysis (Peters et al., 2018) and passage reranking (Nogueira and Cho, 2019).",reasoning test_1593,"Gestures can be triggered at the sub-word level; for example, by a change of intonation in acoustics.",it is
important to have sub-word level alignment between language and acoustics to generate the freeform gestures.,reasoning test_1594,"In these existing methods, their user models are trained in an end-to-end way using the labeled data of target task, which can only capture task-specific information.","in this paper we propose to pre-train user models from unlabeled user behavior data via self-supervision, which can exploit universal user information encoded in user behaviors.",reasoning test_1595,We simply use the element-wise average operation for f .,"if the segmentation of the word changes, the corresponding embedding and gradient vector will change accordingly.",reasoning test_1596,"However, n-best decoding is n-times time consuming compared to the standard decoding method.",we only use 1-best decoding which is the standard decoding framework for evaluating the translation quality.,reasoning test_1597,Proposition 1 shows that the objective is maximized when the mutual information is maximized to I max .,maximizing the mutual information by other means can help side-step this issue.,reasoning test_1598,"In the biomedical domain, there exist several entities, such as genes, chemicals, and diseases, that are closely related to each other.","extracting the relationships among these entities is critical for biomedical research, particularly in fields such as construction of a knowledge base or drug development.",reasoning test_1599,"In other words, the output probability of the over-confident model does not indicate how uncertain the input example is, even if its classification performance is high.","several approaches, called ""calibration"" techniques, have been applied to several domains that require high reliability, such as autonomous driving and medical diagnosis (Guo et al., 2017;Jiang et al., 2012).",reasoning test_1600,"When a certain level of performance is guaranteed, unlabeled data predicted with a high probability in the classifier is likely to have a 
corresponding label.","such data can be used as pseudolabeled data, even if it contains slight noise.",reasoning test_1601,"Moreover, we applied self-training on our model to augment the training data and boost the performance, as our model is well-calibrated; it returned reliable output probabilities.",our model outperformed the other chemical-protein relationship extraction models and achieved state-of-the-art performance regarding the Biocreative VI ChemProt task.,reasoning test_1602,"As a meeting consists of utterances from different participants, it forms a natural multi-turn hierarchy.",the hierarchical structure carries out both token-level understanding within each turn and turn-level understanding across the whole meeting.,reasoning test_1603,"As the canonical transformer has the attention mechanism, its computational complexity is quadratic in the input length.","it struggles to handle very long sequences, e.g. 5,000 tokens.",reasoning test_1604,This reconstructor quantifies the clinical correctness of the reports.,we can estimate the correctness of reports without rule-based annotators.,reasoning test_1605,"To train the language model, RL with only CRS and ROUGE as a reward is insufficient.",we use the cross-entropy loss to generate fluent sentences.,reasoning test_1606,This evaluation is intended to assess whether our proposed method is also applicable to the MIMIC-CXR dataset or not.,"in this evaluation, we focus only on the data-to-text module.",reasoning test_1607,"To the best of our knowledge, there are no publicly available entity typing datasets in the medical domain.",three entity typing datasets are constructed from the corresponding medical named entity recognition datasets.,reasoning test_1608,"Long tokens are relatively common in the medical domain, and these tokens will be split into short pieces when a domain-independent vocabulary is used, which will cause an overgeneralization of lexical features.",a medical vocabulary generated by the 
PubMed corpus can be introduced into BERT-MK in the following work.,reasoning test_1609,"The appropriate responses to the user queries are highly dependent on the visual information pertaining to the different aspects of the various images in the conversation.",it is natural to conclude that a conversational agent would be more effective if the visual information were part of its underlying conversational model.,reasoning test_1610,"We come up with the following two top-level principles for domain selection after closer review and extensive discussions: (i). it encompasses a broad group of task-oriented frameworks used by industries/service providers and is likely to build user interfaces; (ii). for deeper comprehension and clarification of the services, the domains need visual details.","we choose to curate conversations belonging to three distinct domains in our newly established large-scale MDMMD dataset, namely restaurants, electronics, and furniture.",reasoning test_1611,"For example, the cuisine is the aspect category but Chinese is the aspect term that according to the user could change into Mexican, Japanese in the remaining utterances of a particular dialogue.",the labeling of both the aspect category and aspect term is essential for the generation of aspect guided responses to learn the subtle differences between the different aspect terms within the same category.,reasoning test_1612,"Conversely, the larger the L, the closer we are to sampling strategies.","by tuning L, it is possible to combine the advantages of both sampling and likelihood-based strategies.",reasoning test_1613,This is also the basis on which we can continuously expand BERT's ER-Length and continue to benefit.,"for a particular dataset, when we set the ER-Length of the BERT, letting it exceed more data's DLength can always bring more improvements.",reasoning test_1614,"In the training phase, the current triplet prediction relies on the gold-standard labels of the previous triplets, while
in the testing phase, the current triplet prediction relies on the model prediction of the previous triplets, which can be different from the gold-standard labels.","in the test phase, a skewed prediction will further deviate the predictions of the follow-up triplets; if the decoding length is large, the discrepancy from the gold-standard labels would be further accumulated.",reasoning test_1615,"For the sequence labeling tasks, because the data is relatively small and confined to a very specific domain (chat log from online apparel shops), we set a small vocabulary size of 500 for all the methods except NumAsTok and set the vocabulary size of NumAsTok to 550 to ensure that different methods have similar numbers of parameters for word embedding training.","our methods have (500 + |P|) × D parameters for word embedding training and NumAsTok has 550 × D parameters, where P is the prototype set, whose size is typically smaller than 50, and D is the embedding dimension.",reasoning test_1616,"However, running these models on edge-devices, faces memory and latency issues due to limitations of the hardware.","there has been considerable interest towards research in reducing the memory footprint and faster inference speed for these models (Sainath et al., 2013;Acharya et al., 2019;Shi and Yu, 2018;Jegou et al., 2010;Chen et al., 2018;Winata et al., 2019).",reasoning test_1617,"In practice, state-of-the-art NLP models (Vaswani et al., 2017;Lioutas and Guo, 2020) have shown better performance with parameter sharing between the two (Press and Wolf, 2017).","there is a need for an exhaustive analysis of various embedding compression techniques, with parameter sharing.",reasoning test_1618,"Lastly, embedding compression models not based on linear SVD (Khrulkov et al., 2019;Shi and Yu, 2018) require the reconstruction of the entire embedding matrix or additional computations, when used at the output-layer.","during runtime, the model either uses the same amount of memory as the
uncompressed model or pays a higher computation cost.",reasoning test_1619,The consequence would be loss of information on words not seen during training and loss of generalization performance.,adding a loss for embedding reconstruction helps in grounding the embedding and not lose a lot of information.,reasoning test_1620,"Based on (Chen et al., 2015b; Shao et al., 2017; Zhang et al., 2018), n-gram features are of great benefit to Chinese word segmentation and POS tagging tasks.",we use unigram and bigram embeddings for our models.,reasoning test_1621,Each character can directly attend the criterion-token to be aware of the target criterion.,we can use a single model to produce different segmented results for different criteria.,reasoning test_1622,"In this work, we only adopt the vanilla Transformer encoder since we just want to utilize its self-attention mechanism to model the criterion-aware context representation for each character neatly.","it is promising for future work to look for the more effective adapted Transformer encoder for CWS task or to utilize the pre-trained models (Qiu et al., 2020), such as BERT-based MCCWS (Ke et al., 2020)",reasoning test_1623,However as we show here - existing works are not optimized for dealing with pairs (or tuples) of texts.,they are either not scalable or demonstrate subpar performance.,reasoning test_1624,"For identity-hate, overlap with toxic is 1302/1405.","in this paper, we use the term toxic more generally, subsuming threat and identity-hate as particular types of toxic speech.",reasoning test_1625,The pre-trained models are in principle already complete table entailment predictors.,it is interesting to look at their accuracy on the TABFACT evaluation set before fine-tuning them.,reasoning test_1626,"But in this sentence, a human is able to reason its label to LOCATION easily by recognizing the contextual phrase ""went to"" or ""by car"".",contextual information is crucial to improve model robustness in context
level.,reasoning test_1627,"Here, replacing with similar words is more likely to produce valid natural language sentences and thus they should have higher probability during replacement.","instead of getting a random replacement word from the full vocabulary, a word similarity based replacement is proposed and applied here.",reasoning test_1628,"In addition, frequent substitution with extremely different words is not ideal either since it is not likely to produce reasonable sentences.",we want to make the probability distribution of sampling substitutes to focus more on the words which share some similarities with the original words but not far-away.,reasoning test_1629,"Though we keep the semantic latent variable, z s , and switch the gender latent variable, z g , to generate the gender-counterfactual word embedding, their concatenation during decoding can be vulnerable to the semantic information changes because of variances in the individual latent variables.","we constrain that the reconstructed word embedding with the counterfactual gender latent, w cf , differs only in the gender information from w n , which is the reconstructed word embedding with the original gender latent.",reasoning test_1630,"Arguments written in the TL provide a more realistic evaluation set than translated texts, specifically for tasks where labels are not well-preserved across automatic translation.","we created a new multilingual evaluation set by collecting arguments in all 5 languages (ES, FR, IT, DE, and NL) for all the 15 topics of the ArgsEN test set, using the Appen 5 crowdsourcing platform.",reasoning test_1631,"The results showed that many of the annotators labeled a vast majority (>80%) of the arguments as high-quality, even though they were instructed to consider only half as such.",only those labeling ≤ 80% of arguments as highquality were allowed further work.,reasoning test_1632,"Previous approaches typically apply only a single round of attention focusing on simple 
semantic information. In our ADE detection task, instead, key elements of the sentence can be linked to multiple categories of task-specific semantic information of the named entities (ADE, Drug, Indication, Severity, Dose etc.).",single attention is insufficient in exploring this multi-aspect information and consequently risks losing important cues.,reasoning test_1633,"To reduce the word annotation burden, we are interested in understanding whether a word classifier trained on one domain can be applied in another.","we measure cross-domain accuracy, e.g., by fitting the word classifier on IMDB dataset and evaluating on Kindle dataset.",reasoning test_1634,"However, parsing with HRGs is not practical due to its complexity and large number of possible derivations per graph (Groschwitz et al., 2015).","work has looked at ways of constraining the space of possible derivations, usually in the form of alignments (see Gilroy (2019) for an extensive review of the issue).",reasoning test_1635,"So far, the generalization ability of current summarization systems when transferring to new datasets still remains unclear, which poses a significant challenge to design a reliable system in realistic scenarios.","in this work, we take a closer look at the effect of model architectures on cross-dataset generalization setting.",reasoning test_1636,"Despite recent impressive results on diverse summarization datasets, modern summarization systems mainly focus on extensive in-dataset architecture engineering while ignore the generalization ability which is indispensable when systems are required to process samples from new datasets or domains.","instead of evaluating the quality of summarization system solely based on one dataset, we introduce cross-dataset evaluation (a summarizer (e.g., L2L) trained on one dataset (e.g., CNNDM) will be evaluated on a range of other datasets (e.g., XSUM)).",reasoning test_1637,"When served as test set, such dataset brings great challenge for BERT match to
correctly rank the candidate summaries while it provides more training signals when served as training set.",the in-dataset (Bigpatent b) trained model obtain much higher score compared with cross-dataset models which trained from other datasets and cause lower stableness.,reasoning test_1638,"While the authors' analysis suggested the domains were similar enough to justify transfer attempts, only limited post-hoc analysis of the data platform effect was carried out.",it remains unclear to what extent the annotation methodologies as opposed to platform effects (or other confounds) caused the degradation.,reasoning test_1639,"Traditionally, different feature vocabularies account for domain transfer loss (Serra et al., 2017;Chen and Gomes, 2019;Stojanov et al., 2019).",we hypothesize that limited feature overlap and poor vocabulary alignment across datasets could hinder cross-domain generalization.,reasoning test_1640,"On the contrary, the recently proposed VizWiz dataset (Gurari et al., 2018) was collected from blind people taking photos and asking questions about those photos.","the images in VizWiz are often of poor quality, and questions are more conversational with some questions might even be unanswerable due to the poor quality of the images.",reasoning test_1641,"A natural way of verifying the instruction sets from Stage 1 is to have new workers follow them (Chen et al., 2019).","during Stage 2 Verification, a new worker is placed in the environment encountered by the Stage 1 worker and is provided with the NL instructions that were written by that Stage 1 worker.",reasoning test_1642,"As shown in Figure 9, the score of rPOD is decreased according to the placement error (the Manhattan distance) exponentially.","to score high in the rPOD metric, agents should place the target objects as close to the target place as possible.",reasoning test_1643,"Moreover, scoring all code snippets can be computationally inefficient in practice.","we use the method of Yang et
al. (2019) to first uniformly sample a subset of data, whose size is much smaller than the entire training set size, and then perform adversarial sampling on this subset.",reasoning test_1644,"KERMIT (Chan et al., 2019) further simplified the Insertion Transformer model by removing the encoder and only having a decoder stack (Vaswani et al., 2017), by concatenating the original input and output sequence as one single sequence and optimizing over all possible factorizations.","KERMIT is able to model the joint p(x, y), conditionals p(x | y), p(y | x), as well as the marginals p(x), p(y).",reasoning test_1645,"Although promising results are obtained, existing models are limited in regarding extra features as gold references and directly concatenate them with word embeddings.","such features are not distinguished and separately treated when they are used in those NER models, where the noise in the extra features (e.g., inaccurate POS tagging results) may hurt model performance.",reasoning test_1646,"Therefore, such features are not distinguished and separately treated when they are used in those NER models, where the noise in the extra features (e.g., inaccurate POS tagging results) may hurt model performance.",it is still a challenge to find an appropriate way to incorporate external information into neural models for NER.,reasoning test_1647,"For example, as illustrated in Figure 2(c), for ""Salt"", its context features are ""Salt"" and ""City"" (the governor of ""Salt""), and their corresponding dependency information are ""Salt compound"" and ""City root"".","for each type of syntactic information, we obtain a list of context features and a list of syntactic information instances, which are modeled by a KVMN module to enhance input text representation and thus improve model performance.",reasoning test_1648,"Second, on the contrary, SA is able to improve NER with integrating multiple types of syntactic information, where consistent improvements are observed among
all datasets when more types of syntactic information are incorporated.",the best results are achieved by the model using all types of syntactic information.,reasoning test_1649,"Later, the syntax attention ensures that the constituent information should be emphasized and the gate mechanism also tends to use syntax for this input with higher weights.",this case clearly illustrates the contribution of each component in our attentive ensemble of syntactic information.,reasoning test_1650,"However, to enhance NER, it is straightforward to incorporate more knowledge to it than only modeling from contexts.","additional resources such as knowledge base (Kazama and Torisawa, 2008; Tkachenko and Simanovsky, 2012; Seyler et al., 2018; Liu et al., 2019b,a; Gui et al., 2019b,a) and syntactic information (McCallum, 2003; Mohit and Hwa, 2005; Finkel and Manning, 2009; Li et al., 2017; Luo et al., 2018; Cetoli et al., 2018; Jie and Lu, 2019) are applied in previous studies",reasoning test_1651,"Processing of legal contracts requires significant human resources due to the complexity of documents, the expertise required and the consequences at stake.","a lot of effort has been made to automate such tasks in order to limit processing costs-notice that law was one of the first areas where electronic information retrieval systems were adopted (Maxwell and Schafer, 2008).",reasoning test_1652,One important advantage over static embeddings is the fact that every occurrence of the same word is assigned a different embedding vector based on the context in which the word is used.,"it is much easier to address issues arising from pre-trained static embeddings (e.g., taking into consideration polysemy of words).",reasoning test_1653,"Our aim is for explanations to better communicate the task model's reasoning process, without adopting the trivial solution, i.e., directly stating its output.","while we optimize explanations for simulatability, we also penalize label leakage, which we 
formalize below.",reasoning test_1654,"We also try to incorporate the head information in constituent syntactic training process, namely max-margin loss for both two scores, but it makes the training process become more complex and unstable.",we employ a parameter to balance two different scores in joint decoder which is easily implemented with better performance.,reasoning test_1655,"Overall, joint semantic and constituent syntactic parsing achieve relatively better SRL results than the other settings.",the rest of the experiments are done with multi-task learning of semantics and constituent syntactic parsing (wo/dep).,reasoning test_1656,"Besides, LIMIT-BERT takes a semi-supervised learning strategy to offer the same large amount of linguistics task data as that for the language model training.","lIMIT-BERT not only improves linguistics tasks performance, but also benefits from a regularization effect and linguistics information that leads to more general representations to help adapt to new tasks and domains.",reasoning test_1657,"(2) Naturally empowered by linguistic clues from joint learning, pre-trained language models will be more powerful for enhancing downstream tasks.","we propose Linguistics Informed Multi-Task BERT (LIMIT-BERT), making an attempt to incorporate linguistic knowledge into pre-training language representation models.",reasoning test_1658,"BERT is typically trained on quite large unlabeled text datasets, BooksCorpus and English Wikipedia, which have 13GB plain text, while the datasets for specific linguistics tasks are less than 100MB.",we employ semi-supervised learning to alleviate such data unbalance on multi-task learning by using a pre-trained linguistics model to label BooksCorpus and English Wikipedia data.,reasoning test_1659,"To absorb both strengths of span and dependency structure, we apply both span (constituent) and dependency representations of semantic role labeling and syntactic parsing.","it is a natural idea to study the
relationship between constituent and dependency structures, and the joint learning of constituent and dependency syntactic parsing (Klein and Manning, 2004;Charniak and Johnson, 2005;Farkas et al., 2011;Green and Žabokrtský, 2012;Ren et al., 2013;Xu et al., 2014;Yoshikawa et al., 2017).",reasoning test_1660,"Ideally, we expect that the representation vectors in the deep learning models for ABSA should mainly involve the related information for the aspect terms, the most important words in the sentences.","in this work, we propose to regulate the hidden vectors of the graph-based models for ABSA using the information from the aspect terms, thereby filtering the irrelevant information for the terms and customizing the representation vectors for ABSA.",reasoning test_1661,"In this work, we hypothesize that these overall importance scores from the dependency trees might also provide useful knowledge to improve the representation vectors of the graph-based models for ABSA.",we propose to inject the knowledge from these syntax-based importance scores into the graph-based models for ABSA via the consistency with the model-based importance scores.,reasoning test_1662,Solving VLQA examples requires linking information from image and text.,"vLQA can be considered a novel kind of multi-hop task involving images and text, which we believe will drive future vision-language research.",reasoning test_1663,Now we want to determine where is A' located in the image I.,"we formulate a new question Q' as ""Where is A'?",reasoning test_1664,"Encoding hierarchical information from large type inventories has been proven critical to improve performance (Lopez et al., 2019).",we hypothesize that our proposed hyperbolic model will benefit from this representation.,reasoning test_1665,"Because the OneCommon game framework rewards players if they successfully create common ground with each other, players may think to mention to more salient dots to increase the success rate.",the variation of
expressions could be restricted.,reasoning test_1666,"However, our architecture is difficult to directly apply to referring expression generation because it outputs modulated feature maps.",the future direction is to extend our architecture to language generation.,reasoning test_1667,"For clusters with only negative edges like the triad in Figure 1c, even though the relation is imbalanced according to the definition, we are unable to determine whether there should be a pair of paraphrases in the graph without knowing the actual semantic meaning of the sentences.",we use the weaker form of structural balance to represent graphs with all negative edges.,reasoning test_1668,"The NELL-995 does not come with a validation set, and therefore we selected 3000 edges randomly from the full NELL KB.",many of the query relations were different from what was present in the splits of NELL-995 and hence is not a good representative.,reasoning test_1669,"The other kind (Zheng et al., 2019) is to generate and classify candidate regions in a two-stage paradigm, often leading to cascaded errors.",region based methods face efficiency and effectiveness challenges.,reasoning test_1670,"Moreover, the Transformer model is inherently much slower than conventional machine translation approaches (e.g., statistical approaches) mainly due to the auto-regressive inference scheme (Graves, 2013) incrementally generating each token.",deploying the Transformer model to mobile devices with limited resources involves numerous practical implementation issues.,reasoning test_1671,This passive and relatively simple dialogue mechanism gains less attention from humans and consumes the interests of human beings rapidly.,some recent researches attempt to endow the bots with proactivity through external knowledge to transform the role from a listener to a speaker with a hypothesis that the speaker expresses more just like a knowledge disseminator.,reasoning test_1672,Two humans take the topic leading role 
like a speaker to introduce something new in turns.,"in human-machine conversation, the dialogue agent side needs to act as a speaker timely and appropriately.",reasoning test_1673,"When one leads the dialog, the other takes a backseat and forgets more.",we predict the role with this forget gate and generate a response not only on the default decoder state but also on the predicted role simultaneously.,reasoning test_1674,While the listener requires more forgetting to decrease the influence of the knowledge input.,it seems the forget gate here is just a hidden variable which represents the role.,reasoning test_1675,The Initiative-Imitate recognizes the role of the human and controls the knowledge utilizing in the next sentence generation.,it is more engaged in the whole dialog.,reasoning test_1676,"Differently from Chirps, this model makes its event clustering decision based on the predicate, arguments, and the context of the full tweet, as opposed to considering the arguments alone.","we expect it not to cluster predicates whose arguments match lexically, if their contexts or predicates don't match (first example in Table 3).",reasoning test_1677," Following Shwartz et al. 
2017, we annotated the templates while presenting 3 argument instantiations from their original tweets.",we only included in the final data predicate pairs with at least 3 supporting pairs.,reasoning test_1678,"However, the extent of annotation and the utility of domain adaptation for training are unknown.",our main question is how successfully can a semantic parser learn with alternative data resources to generalize to novel queries in a new language?,reasoning test_1679,"In initial experiments, we found negligible difference in MT-Paraphrase using random sampling or roundrobin selection of each paraphrase.",we assume that both methods use all available paraphrases over training.,reasoning test_1680,The model trained on MT achieves nearly the same generalization error as the model trained on the gold standard.,we consider the feasibility of our approach justified by this result.,reasoning test_1681,"As previously discussed, we translate only the development and test set of Overnight (Wang et al., 2015) into Chinese and German for assessment of crosslingual semantic parsing in a multi-domain setting.","we translate all 5,473 utterances in ATIS and 4,311 utterances in Overnight.",reasoning test_1682,"It is impossible for a NER system to cover all entity types (Ling and Weld, 2012;Mai et al., 2018).","in the industrial area, it often happens that some entity types required to recognize by the clients are not defined in the previously designed NER system.",reasoning test_1683,"Note that, there is no labeled data for class K of the source task.",a fully supervised learning algorithm is not applicable to train the classifier.,reasoning test_1684,"The difference between the compared work and our work is that, in the compared work, the mention recognition for one entity type is performed independently to the other types through a binary classifier.",it has to resolve the conflict between the recognition results of different binary classifiers for different entity types 
using a heuristic method at the inference time.,reasoning test_1685,This is because the occurring frequency of mentions of the location type is much lower than the occurring frequency of mentions of the GPE type.,it requires to annotate more data for the location type to cover enough mentions of the type.,reasoning test_1686,"For ranking, the setup is different, as it is not feasible to encode all the candidate documents (from firststage retrieval) into a single input template.",ranking necessitates multiple inference passes with the model and somehow aggregating the outputs.,reasoning test_1687,"We map a relevant document to ""hot"" and a non-relevant document to a completely unrelated word ""orange"".",we force the model to build an arbitrary semantic mapping.,reasoning test_1688,Our propose model OTE-MTL consistently outperforms all state-of-the-art baselines on all datasets with and without OOTs.,we conclude OTE-MTL is effective in dealing with opinion triplet extraction task.,reasoning test_1689,"Previous work has found that if the ratio of biased examples is high, down-weighting, or disregarding all of them results in an insufficient training signal, which leads to performance decreases (Clark et al., 2019;Utama et al., 2020).",we propose a novel multi-bias weighting function that weights each example according to multiple biases and based on each bias' strength in the training domain.,reasoning test_1690,"To apply our framework to training sets that may contain multiple biases of different strengths, we automatically weight the output of the bias models according to the strength of each bias in each training dataset.","we propose a scaling factor F S (B k , D t j ) to automatically control the impact of bias B k in dataset D t j in our debiasing framework, i.e., to reduce the impact of bias on the loss function when the bias is commonly observed in the dataset.",reasoning test_1691,"We observe that (1) different datasets are more affected by certain biases, 
e.g., the ratio of examples that can be answered without the question (the empty question bias) is 8% in SQuAD while it is 38% in NQ, (2) NewsQA is least affected by biases overall while NQ and HotpotQA are most affected, (3) only few instances are affected by all four biases, and (4) except for NewsQA, the majority of training examples are affected by at least one bias.",methods that down-weight or ignore all biased examples will considerably weaken the overall training signal.,reasoning test_1692,Mahabadi et al. (2020) propose two different methods among which the Debiased Focal Loss (DFL) approach has a better performance.,we use DFL in our comparisons.,reasoning test_1693,"To collect a large corpus of parallel data, heuristic rules are often used but they inevitably let noise into the data, such as phrases in the output which cannot be explained by the input.",models pick up on the noise and may hallucinategenerate fluent but unsupported text.,reasoning test_1694,It may also assign points for a match with the reference which is unsupported by the table.,it can give a wrong estimate of both precision and recall and should be complemented with a human evaluation if two similar performing models are compared.,reasoning test_1695,"These models may highly rely on manual feature engineering, which makes them laborious and time-consuming and are difficult to adapt to new domains.","more and more research (Manning and Eric, 2017; Sukhbaatar et al., 2015; Dodge et al., 2016; Serban et al., 2016; Bordes et al., 2017; Eric and Manning, 2017) dedicated to building end-to-end dialogue systems, in which all their components are trained entirely from the utterances themselves without the need to assume domains or dialog state structure, so it is easy to automatically extend to new domains and free it from manually designed pipeline modules.",reasoning test_1696,Then we use a memory network to encode the results retrieved from KBs.,we can access KBs more efficiently and 
achieve a high task success rate.,reasoning test_1697,"Intuitively, the more columns are subject to variations, the more diverse the records are.",fewer records will match the query when more columns are subject to variations.,reasoning test_1698,"By employing a subsystem, including a Dialogue State Tracker and a SQL Generator, AirConcierge can issue a precise SQL query at the right time during a dialogue and retrieve relevant data from KBs.","airConcierge can handle large-scale KBs efficiently, in terms of shorter processing time and less memory consumption.",reasoning test_1699,"Unique to our problem, however, is the fact that we have an open set of relation types in the graphs.","we propose a novel graph-conditioned sparse transformer, in which the relation information is embed-ded directly into the self-attention grid.",reasoning test_1700,"From the analysis of our in-house search log, more than 95% of the queries have only one or two nodes, thus a scenario in which more than one edit operation applied is unlikely.",the instances in Modified MSCOCO and GCC are constructed with one edit operation.,reasoning test_1701,"Since our task takes a source graph and a modification query as inputs, we need two encoders to model the graph and text information separately.","there are four main components in our model: the query encoder, the graph encoder, the edge decoder and the node decoder.",reasoning test_1702,"To efficiently encode a graph, we need to encode the information not only from these constituent components, but also their interactions, namely the node-edge association and connectivity.",we incorporate the information from all the edges to the nodes from which these edges are originated.,reasoning test_1703,"Getting annotation from users is expensive, especially for a complex task like our graph modification problem.",we explore the possibility of augmenting the user-generated data with synthetic data in order to train a better model.,reasoning 
test_1704,"Using Design I for the annotation task, we noticed that workers were not motivated to identify and select different physical entities in motion.","the majority of labeled entities were animate, and few were inanimate entities in motion.",reasoning test_1705,It is difficult to reuse sentence-level templates in new tasks that usually have different requirements.,people usually break long templates into smaller template units (TUs).,reasoning test_1706,"Since the vocabulary of the connection phrases is limited, we automatically generate text stitch training data by dropping certain words in free texts with simple rules.",we can train a high-quality text stitch model in a self-supervised paradigm.,reasoning test_1707,Note that it only inserts connection phrases and preserves all the contents in the input.,"compared with traditional encoder-decoder frameworks that generate texts from scratch, edition-based methods are better fits for this setting.",reasoning test_1708,"We find tokens with POS tags adp, aux, cconj, part, punct, sconj, verb are often parts of a connection phrase.","for each pair of adjacent sentences, we consider each token tok i with these POS tags as an indicator of potential segmentation of two TU instantiations.",reasoning test_1709,"In contrast, TS2 only needs 3 templates to constrain the TU orders and does not need to consider the connection phrases.",significant human efforts are reduced in template design and Q4 is partially answered.,reasoning test_1710,METEOR calculates the precision and recall of the matched words between the generated and reference texts after alignment by taking paraphrases into account.,it is less sensitive to expression variations and content orders than other automatic metrics.,reasoning test_1711,"Additionally, generation methods are not suitable for non-English text owing to a lack of training data because they are heavily dependent on in-language supervision (Ponti et al., 2019).","we adopted the sequence 
labeling method to maximize scalability by using (multilingual) BERT (Devlin et al., 2019) and multi-head attention (Vaswani et al., 2017).",reasoning test_1712,"Although some studies have demonstrated the potential of multilingual open IE (Faruqui and Kumar, 2015; Gamallo and Garcia, 2015; White et al., 2016), most approaches are based on shallow patterns, resulting in low precision (Claro et al., 2019).",we introduce a multilingual-BERTbased open IE system.,reasoning test_1713,The corpus strives for a general understanding of ST&WR and its textual material should be as diverse as possible.,"we opted to use shorter excerpts from multiple texts rather than longer, complete texts and also tried to represent many different authors, newspapers and magazines.",reasoning test_1714,"As mentioned above, our annotation system shows similarities to the system defined in the influential narratological theory of Genette (2010), and also to that defined by Leech and Short (2013), both fairly formal systems that incorporate linguistic features in their definitions.",they were particularly suited to be adapted for annotation guidelines and also well suited to our other task of developing automatic ST&WR recognizers.,reasoning test_1715,weDH is mainly the user interface for the DHTK library conceived and proposed by Picca and Egloff (2017) ,"if the main purpose of the library is to facilitate the exploitation of textual repositories such as Gutenberg.org along with of LOD resources such as DBpedia, wikidata and VIAF, the web interface has been conceived in order to be exploited by students and practitioners in the human science field with no or few coding skills.",reasoning test_1716,"In dialogue-heavy works, quoted speech can often exceed 50% of a novel’s text (Elson and McKeown, 2010).",quotations are an important structural component of literary texts.,reasoning test_1717,"VGG was designed with the aim of reconstructing the ""polyphony"" of the languages of Italy at war: the 
official voice of propaganda and the voice of soldiers, the voice of newspapers and the voice of letters, the voice of the elite of intellectuals and the popular voice, the voice of consensus and the voice of dissent, male voices and female voices.","the final corpus is balanced along various dimensions, corresponding to the textual genre, the language variety used, the author type (e.g., sex, education, profession, political orientation etc.",reasoning test_1718,"As a methodological note, in the original corpus the pronoun ""I"" has been resolved to the speaker's name in the process of annotating propositions from locutions (e.g., for the sentence ""I believe Americans do have the ability to give their kids a better future"", ""I believe"" has been replaced with ""O'MALLEY believes"") (Jo et al., 2019).",it is difficult to tell whether the source of a reported speech proposition is indeed the speaker or not.,reasoning test_1719,"However, they use reported speech often, partly because their discussions occurred after the debates had occurred.",these texts often refer back to speech from the debates themselves and the reported speech of the candidates.,reasoning test_1720,Note that the CKY phase is designed to simplify the original RvNN framework that calculates discourse representations recursively according to the tree structure.,"bERT, along with its pretrained linguistic knowledge, can learn the underlying discourse structure itself with raw text segments as inputs.",reasoning test_1721,"It is known that under micro F1 evaluation, different paragraphs are weighted in proportion to the number of their nodes in the discourse trees, while each paragraph is equally weighted under macro evaluation.",we can infer that Our-R takes advantage of predicting local structures.,reasoning test_1722,"In particular, during discussions with teachers where we visualized the collaboration annotations in the corpus that came from their particular classrooms, we found that teachers 
were very curious about whether students were introducing new information into the discussion or building off of what was previously said.",we experimented on how well a classifier could distinguish student turns labeled 'New' from the other collaboration annotations.,reasoning test_1723,"Additionally, the example illustrates that the connective has no relation sense assigned to it.","in our 2.2 version, we add this relation sense according to the PDTB 3.0 sense hierarchy.",reasoning test_1724,"Because of its complexity, it is hard to get annotated data for training statistical models to perform SDP.","our goal is to produce high-quality annotations other than the standard corpus used in the field-the Penn Discourse Treebank (PDTB) (Prasad et al., 2008)-in order to improve performance on explicit argument extraction.",reasoning test_1725,"In contrast, the neural-network-based model predicted low scores for all columns.",the neural-network-based model is robust against an unexpected essay.,reasoning test_1726,We confirmed the robustness of the BERT model with three essays: an essay with a high/low score and one written with only one character.,we found that the BERT model is more robust against unexpected inputs than the feature-based models.,reasoning test_1727,"They could create a corpus with 16,000 humorous one-liners in English, collected from the Web, while, towards the development of a model of humour recognition, much negative data was available.","four sets of negative examples were gathered, namely: news titles from Reuters; proverbs on the Web; sentences from the British National Corpus (BNC); and sentences from the Open Mind Common Sense project.",reasoning test_1728,"Yet, a system for humour recognition should not be restricted to a single style of humour.","to complement the collected text in the one-liners style, we targeted another kind of short humorous texts: humorous headlines.",reasoning test_1729,"The subtitles feature, which refers to the 
intermediate titles throughout the text, was excluded because none of the news contained intermediate titles in their text structure.",the remaining 165 features were computed for each text in the corpus.,reasoning test_1730,"As our current approach is supervised, direct comparison is not possible.","we compare them indirectly, assuming that the average accuracy over 2000 article pairs from the unsupervised approach roughly corresponds to the average accuracy in a 10fold cross-validation with 10 repetitions over the same 2000 article pairs.",reasoning test_1731,"To better assess the universality of our systems, knowing that in real-world scenario some text will not have paragraph division, we wanted to explore how much paragraph organization influences the results of our systems on both classification tasks.","we compared classification performances on three different feature sets: surface + shallow, deep, and combination of all three types of features, in two scenarios: using all features of the corresponding type, and excluding the features that require paragraph information, e.g.",reasoning test_1732,"In order to be able to compare the same parts of different texts (which very often have different sizes), our model breaks the aspect flows in a fixed number of frames.","regardless of the number of sentences in a text, the first frame will represent the first part of the text, for example, which we can compare to another text's first part.",reasoning test_1733,"As we can notice in Equations (2) and (6), to calculate MCR and Energy Entropy correctly, the flow must contain, at least, 2 sentences per frame.","this minimum requirement must be considered during the definition of the number of frames and the K parameter (as it requires, at least, one sentence per subframe).",reasoning test_1734,"Legitimate and fake news contain an average of 21 and 14 sentences per document, respectively.","we decided to split the flows into 3 frames, resulting into 7 and 4.67 sentences 
per frame, on average, for the legitimate and fake news, respectively.",reasoning test_1735,"We observe that many annotations contain punctuation marks, which are considered as separate tokens by spaCy.",we perform the same calculations while ignoring the punctuation tokens.,reasoning test_1736,"Although our findings generally fit with theorized patterns of emotional development, we cannot say for certain whether the current results re-flect changes in felt emotion, emotion vocabulary or, more broadly, changes in the ability to use abstract language.",we are more inclined to cautiously interpret our results as reflecting developmental trends in the distribution of emotion words.,reasoning test_1737,"The main goals of call center conversations are either to pursue a person to sign a contract, or to solve some technical or financial problems.","the question of the evolution of frustration or satisfaction (called satisfaction dimension in the following) along the conversation, is crucial.",reasoning test_1738,For ethical and commercial reasons the agent channel was discarded.,the corpus contains callers' voice only without any overlapping speech.,reasoning test_1739,"In order to do so, having clues about the satisfaction dimension of the caller can be beneficial.",we define a task of satisfaction dimension prediction throughout the conversation.,reasoning test_1740,"On the contrary, if satisfaction is completely differently rated by the three annotators, the CCC computed on each pair is close to 0.","the gold annotation, defined as the mean of the 3 annotation's values, is not consistent.",reasoning test_1741,"Training from the human labeling result, the evaluation model learns which generative models is better in each dialog context.",it can be used for system developers to compare the fine-tuned models over and over again without the human labor.,reasoning test_1742,"However, if an evaluation is fully automatic, then it can be incorporated into a generation 
system.",it can always generate sentences with higher evaluation value.,reasoning test_1743,"We consider emotions in poetry as they are elicited in the reader, rather than what is expressed in the text or intended by the author.","we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within their context.",reasoning test_1744,"Kant (2001) already spoke of a ""feeling of beauty"", and it should be noted that it is not a 'merely pleasing emotion'.","in our pilot annotations, Beauty and Joy were separate labels.",reasoning test_1745,"Furthermore, our pilot annotations revealed, while Beauty is the more dominant and frequent feeling, both labels regularly accompany each other, and they often get confused across annotators.",we add Joy to form an inclusive label Beauty/Joy that increases consistency.,reasoning test_1746,"For the experts, we aggregate their emotion labels on stanza level, then perform the same strategy for selection of emotion labels.","for s, both crowds and experts have 1 or 2 emotions.",reasoning test_1747,None of our models was able to learn this label for German.,"we omit it, leaving us with eight proper labels.",reasoning test_1748,"However, those studies (described in detail below) focus on their particular application rather than addressing the underlying, abstract learning problem (formalized in Section 3.).",previously proposed methods have not been quantitatively compared.,reasoning test_1749,"For MLFFN, we built on the implementation2 and hyperparameter choices Buechel et al. 
(2018) used for the Empathic Reactions dataset.","mLFFN has two hidden layers (256 and 128 units, respectively) with ReLU activation.",reasoning test_1750,"Accordingly, evaluating the accuracy of automatically detected communicative functions reduces to the evaluation of what is captured in sentence representations.",we propose a task of ranking sentence representations according to a given communicative function.,reasoning test_1751,"Formulaic expressions are extracted from the example expressions by hand, but because they can be very specific or sometimes contain irrelevant content, some queries return no results.",we simplify and shorten the formulaic expressions and obtain what we call the core FEs to retrieve more sentences.,reasoning test_1752,Each sentence has a sentence ID that corresponds to the sentence ID in AASC.,the surrounding context of each sentence can be easily retrieved if a classifier needs it.,reasoning test_1753,"The accuracy indicates how likely evaluators were to choose the correct answers, while the agreement indicates the degree to which they made the same choice.","if the sentence selection in the process of creating the dataset fails to make pairs of sentences with the same communicative functions, the accuracy will be low but the agreement will be high.",reasoning test_1754,"We note that 64.7% of the data showed 100% accuracy, and the accuracy for 84.4% of the data is greater than 75%, which implies that the majority of the quizzes are easy to answer.",the task of detecting the communicative functions of sentences is not too difficult for humans.,reasoning test_1755,"For P Score the correction performance of NONE is multipled with the averaged performance over all error categories, due to the fact that we want a changed output for all error categories instead of the NONE category where we explicitely do not want any changed output.",the correction parameter for NONE can be interpreted as a penality parameter.,reasoning 
test_1756,"Although some document types may have structured tabular data, we still need a general approach to detect tables on different kinds of documents.",large-scale endto-end deep learning models make it feasible for achieving better performance.,reasoning test_1757,"For instance, we found that models trained on data similar to Figure 1a 1b 1c would not perform well on Figure 1d because the table layouts and colors are so different.",enlarging the training data should be the only way to build open-domain table analysis models with deep learning.,reasoning test_1758,Latex documents are different from Word documents because they need other resources to be compiled into PDF files.,we cannot only crawl the '.tex' file from the internet.,reasoning test_1759,"Although these methods perform well on some documents, they require extensive human efforts to figure out better rules, while sometimes failing to generalize to documents from other sources.",it is inevitable to leverage statistical approaches in table detection.,reasoning test_1760,"The absence of significant success in ad-hoc IR using deep learning approaches is mainly due to the complexity of solving the ranking task using only unlabelled data (Dehghani et al., 2017).",the availability of large amount of labelled data is crucial to develop effective DNNs for ad-hoc IR.,reasoning test_1761,Such datasets are suitable for feature-based learning-to-rank models but not for DNNs that require the original content of queries and documents.,"most of the deep learning model for ad-hoc IR that have been proposed recently are developed using one of the following approaches: (1) Using large amounts of data collected from commercial search engines that are not publicly available (Yang et al., 2019a;Mitra et al., 2017).",reasoning test_1762,They rely on a first ranking stage made by an efficient model such as BM25 and only re-rank the top-k documents for a given query in order to have an efficient search.,wIKIR can be used 
to run BM25 and save the top-k documents for each query.,reasoning test_1763,"The normalization of taxonomic mentions of bacteria is not much of a challenge for the BioNLP community because the nomenclature is complete, variations are relatively standardized, and synonymy is rare (except in some special cases such as strain names).","string matching with basic variations yields decent results (Grouin, 2016).",reasoning test_1764,We observed no impact of the window size (see Table 1).,we set a short symmetrical window of two tokens in all subsequent experiments.,reasoning test_1765,It can mean that the concepts mentioned in the annotated corpus (whole training and development set) are largely those mentioned in the annotated test corpus.,"both training sets have the same order of training examples for concepts mentioned in the test corpus, which could explain the similar performance.",reasoning test_1766,"In addition, from the perspective of EL solutions, it is observed that (1) EL on short text tends to require excessive hand-crafted features specific to a certain kind of application, which makes it not necessarily applicable to others; and (2) short-text oriented corpus finds itself inappropriate for evaluating the cluster of EL methods based on collective schemes, since short text is unable to supply enough contextual mentions.","long-text oriented corpora are considered to be at least of equal, if not greater, significance to verifying the effectiveness and robustness of EL methods.",reasoning test_1767,"The underlying reason is that mentions in these datasets are not even ambiguous, since most of them were derived from text with high clarity.","there is a pressing need to construct a corpus with a certain level of difficulty, so as to better examine various EL methods.",reasoning test_1768,"Also, we used the referential task for participants fMRI scanning.",it was demonstrated that annotated multichannel corpora like RUPEX can be an important resource for 
experimental research in interdisciplinary fields.,reasoning test_1769,"As a result, it was demonstrated that annotated multichannel corpora like RUPEX can be an important resource for experimental research in interdisciplinary fields.",different aspects of communication can be explored through the prism of brain activation.,reasoning test_1770,"Authors are highly advised to link news articles to Wikinews categories, to allow effective information organization in Wikinews, and do so only when a category is strongly related to the written article.",the categories can be viewed as salience annotations and entities corresponding to these categories as salient entities.,reasoning test_1771,"For pages that receive enough traffic, reliable user click statistics can be obtained and used to derive entity salience labels.","a dataset called Microsoft Document Aboutness (MDA), was constructed.",reasoning test_1772,"However, human annotated salience labels rely on crowdsourcing, which is usually very expensive.",it is preferred to derive salience labels using automated methods.,reasoning test_1773,"For example, in the example article, the entity mention Kim Jong-un is representing an entity and has corresponding Wikinews category Kim Jong-un.",a wikilink is added to refer to the Wikinews category Kim Jong-un.,reasoning test_1774,"However, we observe that basic statistics show major differences between news articles in different years.","we choose to split the dataset on a monthly basis, i.e., all articles up to a threshold month are placed in the training set, while the remaining articles are placed in the test set.",reasoning test_1775,"However, comments are noise for an evaluation dataset because automatic evaluation methods utilizing corrected sentences typically rely on the matching rate between the system output and the corrected sentences to calculate a score.","in this study, we manually corrected the learner sentences extracted from the Lang-8 corpus using consistent 
rules and created a highly reliable evaluation corpus for the correction of grammatical errors in Japanese.",reasoning test_1776,"The Lang-8 corpus has often only one corrected sentence per learner sentence, which is not enough for evaluation.",we ensured that our evaluation corpus has multiple references.,reasoning test_1777,The types of errors differ between handwritten sentences and typewritten sentences.,the JPDB and NAIST Misuse Corpus are not suitable as evaluation datasets for correcting grammatical errors in typewritten sentences.,reasoning test_1778,The Japanese portion of the Lang-8 corpus has a wide coverage.,these techniques can also be applied to JSL texts.,reasoning test_1779,Many of the articles in the Lang-8 corpus are written as if the learner writes a diary.,we designed the annotation rules considering the local and global contexts dedicated to Lang-8's register (writing a blog).,reasoning test_1780,"Overall, 11,437 out of 14,671 (78%) time points are indeterminate for abstract TDT timelines and 10,023 out of 14,671 (70%) are indeterminate for full TDT timelines, compared with 8,769 out of 15,623 (56%) for TimeML timelines.",even full TDTs increase temporal indeterminacy significantly compared to TimeML graphs.,reasoning test_1781,Indexing minimizes the amount of expensive pattern matching that must take place at runtime.,"the runtime system matches a syntax-based graph traversal in 2.8 seconds in a corpus of over 134 million sentences, nearly 150,000 times faster than its predecessor.",reasoning test_1782,Their proposed method automatically learns which parts are relevant for a given classication.,their proposed method gives best results without any external help.,reasoning test_1783,"This arbitration is made possible by the hypothesis that Financial Markets might not be as efficient as they are supposed to be in neoclassic economic theory (Fama, 1970), because of information asymmetry.",it is possible to have an unbiased algorithm digging deep 
into the mass of accessible documents to yield indications about future performances of a company.,reasoning test_1784,"If a company voluntarily hides a vital information for the market's wealth, they fall under the Financial Authority it depends on and might become the subject of a lawsuit for Statement Fraud.",aRs are the most comprehensive public document to evaluate a company potential and strategy.,reasoning test_1785,"We observe that in the first one, the risk section is easily detectable whereas it is not trivial for the second one.","we cannot only use ""risque"" word to isolate the Risk Section in Annual Reports, which can be caused by some risk sentences not containing risk vocabulary but markers of uncertainty.",reasoning test_1786,"We also built a Metadata table containing, for each company, information about its market capitalization and sector of activity, allowing modularity of the corpus for various tasks.","the DoRe corpus can be used for cross-country/dialect, cross industry, cross-size analysis, fraud detection, document segmentation and risk factor extraction but also for pre-training or fine-tuning Language Models on Financial and Economic specific domains for French language and French dialects.",reasoning test_1787,"It is important to note that the relations we have annotated are long-distance relations, connecting often concepts that are mentioned in different sections of the EEG report, or concepts that are in separate sentences.","we believe that by releasing these annotations, we will enable other researchers to make use of the data for many possible downstream applications, not only in the biomedical field, but also in applications that rely on information extraction.",reasoning test_1788,Table 2 defines each of the 16 attributes of EEG activities and illustrates the possible values of each of these attributes.,any identification of EEG activities in EEG reports amounts to recognizing the 16 attributes listed in Table 2 along with the 
polarity and modality attributes.,reasoning test_1789,"In EEG reports, certain words tend to be more ambiguous than others, suggesting that computing the encodings of these words requires additional processing to correctly capture their meaning from the contexts in which they appear.","the TNE leverages Adaptive Computation Time (Graves, 2016) to dynamically allocate more computational resources for the encoding of some words compared to others in the same EEG report.",reasoning test_1790,"Knowing the semantic properties of the knowledge that is needed to connect argument units that are -for example -adjacent vs. those that are not, can guide the process of extracting knowledge for filling these gaps.",we want to investigate whether the distribution of the semantic properties we annotated for the inserted sentences -commonsense relation types and semantic clause types -respectively differs depending on the internal structure of an argument.,reasoning test_1791,"We then form the test set (train, development) by collecting all datapoints whose (subject,relation), (object,relation) is in the test( train or development) set pairs.","the test set has (subject,relation), (object,relation) which are not seen during training.",reasoning test_1792,"The existing automatic evaluation methods for these are limited (Lin et al., 2011;Pitler et al., 2010;Ellouze et al., 2017), usually do not take into account the complex and subjective nature of the linguistic quality factors.",we do not focus on these automatic quality measurement tools in this paper.,reasoning test_1793,"However, the crowd is typically composed of people with unknown and very diverse abilities, skills, interests, personal objectives, and technological resources, which lead to several challenges related to lack of control on participants and consistency of output quality in crowdsourcing (Hossfeld et al., 2014).","outputs produced by the crowd must be checked for quality, and so the quality of crowd-based NLP 
annotations has been repeatedly questioned (Lloret et al., 2018).",reasoning test_1794,"To our knowledge, there is no best practice guideline for summary quality evaluation regarding the optimal number of repetitions per item in crowdsourcing studies used in MOS.",we explore the relationship between the number of repetitions and the correlation coefficient between crowdsourcing and laboratory results to provide a best practice guideline regarding the optimal repetition number in MOS.,reasoning test_1795,"Furthermore, this work does not include any special data cleaning or annotation aggregation method other than the calculating mean values over 24 different judgments for a single item.","further analysis needs to be performed in order to find out the optimal aggregation method along with the corresponding optimal repetition number, such that comparable results to the laboratory can be obtained in a reliable and cost-effective way.",reasoning test_1796,"And, we randomized the order of the judgments five times to lower the effect of lurking variable, so k = 5 where the number 5 was selected arbitrarily.","we got a set of correlation coefficients for unrandomized judgments C measured , and C 1 , C 2 , C 3 , C 4 , and C 5 for five randomizations.",reasoning test_1797,Minor differences in morpheme choice are scored as badly as mistranslating a whole word.,"in addition to BLEU, we have evaluated the MT systems into Inuktitut using YiSi-0 (Lo, 2019), a word-level metric that incorporates character-level information, which has been shown to correlate better with human judgment than BLEU on translation quality into agglutinative languages, such as Finnish and Turkish, at both sentence level and document level (Ma et al., 2018;Ma et al., 2019).",reasoning test_1798,The pdf format is not machine friendly so it is tricky for researchers to work with it.,we convert the original pdf files into plain text files step by step so that they can be used for machine translation or any 
other computational tasks.,reasoning test_1799,"In preliminary experiments we observed that for the full Wikipedia corpora, relatively few words in the evaluation dataset (discussed in Section 4.) were OOVs, yet OOVs are required for our experimental setup","following Adams et al. (2017) we carried out experiments in which we learned cross-lingual embeddings, but downsized the size of the corpora",reasoning test_1800,"Note that English has the largest corpus among the selected languages, and that we always use the full corpus for the target language, but a sample for the source language.","we expect the embeddings for English as the target language to be higher quality than those for English as the source language, which could explain why the accuracy is higher when English is used as the target language than as the source language.",reasoning test_1801,"However, the actual label of such a statement would be positive because of the author's intention.",the annotators were advised to strictly avoid making annotation decisions based on their own point of view (e.g -personal prejudices) when it comes to such sentences.,reasoning test_1802,"First, despite the third issue mentioned above 50% of the ""no"" answers and 80.8% of the ""yes"" answers of the students to closed questions were indeed correct.","there are good reasons to believe that if the learners were more conservative in their answers to closed questions, fewer dubious ""yes"" answers would be collected and the third issue could be solved.",reasoning test_1803,"Table 3 provides additional insights into the properties of the wordnet, and also compares it to Princeton WordNet, the most complete such resource.","our USGW has its fair share of verbs but is relatively poor in adjectives with respect to the PWN (that has 15% of adjectives), which should be addressed in future work.",reasoning test_1804,"Usually, these phrases are called idiomatic expressions, which is suitable when they are used in an idiomatic 
sense, but not so much when they are used in a literal sense.","we propose a new term: Potentially Idiomatic Expressions, or PIEs for short.",reasoning test_1805,"Because the PMB contains many very short documents, there might not be enough context to disambiguate the PIEs.","we extract only PIEs with at least 50 characters of context, i.e. the document should be at least 50 characters longer than the span of the PIE. ",reasoning test_1806,"Given the set of candidate instances from the pre-extraction system, the challenge is to get high-quality sense annotations at manageable costs of time and money using the FigureEight platform.",we strive to make the task as easy as possible for crowdworkers.,reasoning test_1807,The balance between idiomatic and other usages of PIEs does not mean much if the overall number of PIEs and/or tokens in the genre is very small.,we also look at the frequency of PIEs relative to the number of tokens.,reasoning test_1808,"As other ASR-systems, Elpis makes use of dictionaries to specify the pronunciation of words.",it cannot be used in the earliest stages of language documentation when most lexical data is still missing.,reasoning test_1809,Time and resources for fieldwork are often very limited.,it makes sense to start on a phonemic level when lexical data are still sparse.,reasoning test_1810,Due to the long-term and large-scale approach it can be expected that additional collections of possibly relevant primary data will appear over the entire project runtime.,it is unforeseeable which data types they will contain and at what point in time they might become relevant for research conducted along with the project.,reasoning test_1811,"To approach both problems in a sustainable way, the decisions in the area of data modeling that will have to be made will deal with the tension between unification of data sets and vocabularies on the one side and maximum openness to future resources and research queries on the other side.",as one solution 
it was decided to identify standardizable connecting information in the existing project resources and to make sure that connections to internal and external resources of any type can be represented and resolved in the future.,reasoning test_1812,Most native languages in the country have adopted several loanwords given the extended contact with Spanish.,we need to carefully identify and clean the best sentences to build an NLP resource.,reasoning test_1813,"We can find different punctuation marks, bullet entries with or without enumeration, titles/subtitles/headers without any delimiter punctuation in the raw text, among others.",we must be able to handle different kinds of noise to obtain as many correct sentences as possible.,reasoning test_1814,"Similar to the findings of Buck et al. 2014, we noticed that even when the expected language is known, some files contain instructions or entries written in a different language than our targets.","we use a Peruvian language identification tool, developed by Espichan-Linares and Oncevay-Marcos (2017), to label each sentence and drop the texts identified as written in languages out of our scope.",reasoning test_1815,"Following the language identification step, we perform a manual inspection of part of the output and identified specific issues.",we propose different heuristics to clean and prepare a higher-quality corpus.,reasoning test_1816,There are specific sentences with a large number of duplicated tokens (e.g. 
writing exercises for children in language guides or workbooks).,we compute the ratio per sentence of token types V and number of tokens N to identify the duplication process.,reasoning test_1817,"During the plain text conversion, some of the captured mathematical expressions are incorrectly located within a sentence, which loose its original meaning (see Figure 9 in the Appendix).",we establish a rule where we look for sequences of numbers and operators inside a sentence and remove them from our final corpus.,reasoning test_1818,"In the near future, there will be a demand to extend this to workplace-specific tasks and procedures.","a method of gathering crowdsourced dialogue data is needed that ensures compliance with such procedures, whilst providing coverage of a wide variety of dialogue phenomena that could be observed in deployment of a trained dialogue system.",reasoning test_1819,"Emergency response is clearly a high-stakes situation, which is difficult to emulate in a lab or crowdsourced data collection environment.","in order to foster engagement and collaboration, the scenario was gamified with a monetary reward given for task success.",reasoning test_1820,"However, for scenarios such as ours, the role playing requires a certain expertise and it is questionable whether the desired behaviour would be achieved simply by letting two non-experts converse with free text.","in recent data collections, there have been a number of attempts to control the data quality in order to produce a desired behaviour.",reasoning test_1821,It is quite common for an application to be available in five to ten languages simultaneously.,"to minimize the effort for new applications, the framework offers a Resource Grammars Library (RGL) for many languages (Ranta, 2009).",reasoning test_1822,"It is a small police office which was originally found only in Japan, although now variations exist elsewhere, such as in Singapore, where it is called a neighbourhood police post.",it needed 
a hyponym relation to the English synset for police station: 03977678n.,reasoning test_1823,Both of these projects do not make use of the full range of etymological relationships present in Wiktionary.,"we were motivated to develop our own Wiktionary parser that is both comprehensive and extensible: it can extract the etymological information and many other types of information annotated in Wiktionary, and it is easy to use and extend for further research.",reasoning test_1824,"Performance on Japanese (ja) beats the high-performing baseline because of a feature of the Japanese writing system: foreign words are written in katakana, while native words are written in hiragana or kanji.",foreign words are easily distinguished as borrowing due to differences in the script.,reasoning test_1825,"Because of this, the simpler models give an incorrect birth year, while the curve fitting model correctly identifies the start of a period of exponential grow around 1960.",the curve-fitting model works well as a model for word emergence.,reasoning test_1826,We assume that the amount of samples is more important.,we investigate this hypothesis by selecting subsets of a 160k generated sample pairs from TRANSLIT.,reasoning test_1827,"The second way a surface word edit can occur is when a compound word is split up into its components; this does not occur a great deal in English and French, but is common in Swedish, German, Icelandic and Dutch.","for example in Kallocain (Swedish), where the action takes place in an imagined totalitarian society, there are many compounds with ""polis"" = ""police"" (""polischef"" = ""police chief"", ""polissekreterare"" = ""police secretary""...), ""tjänst"" = ""service"" (""tjänsteplikt"" = ""service duty"", ""offertjänst"" = ""sacrifice service""...), etc.",reasoning test_1828,"A pattern in the library consists of a list of words, each marked by typecase as being either a surface word or a lemma.","for example in English the phrasal verb ""catch 
up"" is entered as CATCH up, indicating that ""catch"" can be inflected but not ""up"".",reasoning test_1829,"This last feature was designed intentionally to leverage the power of word embedding techniques, i.e., with the words mapped to an embedding space and the appropriate distance measures, we can easily capture semantically related words to the ones in the lexicons.",we do not need to build comprehensive vocabularies and can focus on the most representative words for each lexicon dimension.,reasoning test_1830,"Raters are supposed to perform impartial and objective evaluations, and they must enter specific comments in order to ground their scores.",for each essay in our dataset we also have the corresponding rater comments.,reasoning test_1831,"More specifically, we learn word embeddings (Mikolov et al., 2013c) for the Portuguese language, and then we employed the Word Mover's Distance function (Kusner et al., 2015) between a comment and the five subjectivity lexicons.","each comment is finally represented by a five-dimensional subjectivity vector, where each dimension corresponds to the amount of a specific type of subjectivity.",reasoning test_1832,"Over the years, thousands of public IT systems have been developed, but they are not communicating in a common language, and there has not previously been a common public plan for how IT systems securely and efficiently can exchange data and become part of a coherent process.","the government, municipalities and regions have agreed that, as part of the public digitization strategy for 2016-20, a common public architecture for secure and efficient data sharing and the development of processes that connect public services must be established.",reasoning test_1833,It occurs when researchers share data on personal websites that become obsolete after time.,we attempt to mitigate that by sharing our corpus on LREC repository.,reasoning test_1834,"Moreover, segmenting Hadith components is a domain-specific task that can 
be even tricky for the non-specialist.",automating it ensures consistency in segmentation.,reasoning test_1835,One of their current projects is studying forged Hadiths attributed to the prophet to understand the political views at a specific time in history.,"forged Hadiths are being discovered and might keep emerging, which indicates a Hadith segmentation tool is not dealing with a closed set of data.",reasoning test_1836,"KPIs are not only considered useful for measuring progress, but also for collecting feedback on the strategy of a research infrastructure.",in 2018 CLARIN ERIC started work on a framework for KPIs that would help the CLARIN community to describe the progress in developing and operating the research infrastructure for language resources in quantitative terms.,reasoning test_1837,"Because our strings contain logographic material which may not have a clear connection to the phonological representation, alignment is challenging.",we opted for using models based on the neural encoderdecoder architecture.,reasoning test_1838,for a coincidentally similar looking lexeme.,cleaning and more careful selection of the training data would likely have a positive impact on the results.,reasoning test_1839,"Also in syllabic renderings, many transcriptions map unambiguously to one phonological representation.","it would be useful to minimize ambiguity by first using a large dictionary lookup (e.g. 
consisting of the whole corpus of a given dialect in Oracc), and then trying to predict the correct phonological rendering only if the transcription is clearly ambiguous",reasoning test_1840,"Arguably, a test corpus sampled uniformly over a one year period might be more representative.",we first reserve the 2018 filings (450k sentence pairs) for validation and test.,reasoning test_1841,"In this experiment, we are interested in measuring the impact of train/test overlapping on NMT performance.",we compare the performance of models trained on randomly picked 2M sentences from sedar-train and tested on 2 variants of the held-out datasets: before and after removing overlapping sentences.,reasoning test_1842,"In most of the work, the focus relies on the models without interpreting the data which performs much better on our own test set rather than on general translated sentences.","it is essential to analyse, correct and clean the data before using it for the experiments.",reasoning test_1843,"However, to the best of our knowledge, no comparative analysis of MT errors output by PBSMT and NMT has been done for the English and Brazilian Portuguese language pair.",in this article we present the first error analysis of an NMT system's output for Brazilian Portuguese.,reasoning test_1844,"The authors provide multiple analyses of NMT outputs compared to PB-SMT ones, considering different characteristics, such as fluency and reordering.","Toral and Sanchez-Cartagena (2017) state that translations performed by NMT systems tend to be more fluent, however they also observed that quality degrades faster with the sentence length.",reasoning test_1845,"Although tens of thousands of original Japanese documents are disclosed every year (i.e., over 79,000 documents in 2018), the availability of English disclosure documents is limited.",there is a strong demand for machine translation on both listed companies and global investors since Japanese to English translation needs to be done 
in a timely manner.,reasoning test_1846,"Furthermore, most investors require TSE-listed companies to disclose both Japanese and English documents simultaneously.",it is not easy to meet the demand for the English translation of timely disclosure documents using manual translation only.,reasoning test_1847,"However, the subword tokenization solves the problem only if a rare word can be translated as constitutive words.","even using the subword tokenization, the NMT systems are often unable to translate neither numbers with many digits nor constitutive proper nouns.",reasoning test_1848,"Taking news domain for example, one entity word usually needs to keep consistent translation across the whole document in newswire.",the gains mainly come from better translation consistency contributed by document context.,reasoning test_1849,"These root, prefix, and suffix combinations are decompositions of the gold standard words contained in the treebank.",our vocabulary is larger than that of the gold standard because of the additional patterns applicable to the prefix-root-suffix combinations.,reasoning test_1850,"For instance, a NOUN in PADT UD may be a noun, proper name or function word as it contains words such as myrAv (inheritance) ""noun"", dwlAr (dollar) ""proper name"", and kl (all) ""function word"".",a word that is analyzed as a noun in PADT UD and analyzed as a function word in our model is marked as a function word and a match occurs.,reasoning test_1851,The analysis length method depends on a heuristic that shorter analyses are more probable than long ones.,complex analyses having many tags will have larger weight (less probability) than simple/short ones.,reasoning test_1852,"As can be seen from this example, some of the subtokens such as H, el,, l are not valid English words, whereas some such as world are.",the effect of subtokenisation on downstream NLP tasks that require the semantics of the original input string to be retained remains unclear.,reasoning 
test_1853,"Compared to BPE, which produces short and nonsensical subwords, Morfessor is a conservative segmenter that captures longer subwords.",morfessor reports the best performance on entity typing.,reasoning test_1854,"As observed in Table 2, among the different composition methods, SIF method reported the best results across languages.",we use SIF for creating word embeddings from subword embeddings in this experiment.,reasoning test_1855,"Incorporating characterlevel embeddings via LSTMs has shown to improve performance for named entity recognition tasks (Zhai et al., 2018).",applying more sophisticated supervised composition methods such as a recurrent neural network might help to create word embeddings from subtoken embeddings under such situations.,reasoning test_1856,"Because of the corpus' conversational style (Khalifa et al., 2016a), some portions of the text had particularly long sentences.",we split sentences in a cascading fashion with a length of 200 words and a buffer of 10 words at the beginning of the new sentence from the previous one to maintain contextual integrity.,reasoning test_1857,"A limitation of this research paradigm, of course, is that systems that are directly designed to identify morpheme boundaries do not provide more information than the morpheme itself.","it is not possible to tell the exact morphological structure of complex words, including how these complex words are derived from simpler words and what kind of morphological features the morphemes are corresponding to such as prefixes, suffixes, etc.",reasoning test_1858,Domain changes will certainly impact the neighborhoods in the embedding space.,"a comparison of words relative distances in two embedding spaces can be used to measure their degree of domain shift (Kulkarni et al., 2015;Asgari and Mofrad, 2016).",reasoning test_1859,"GP is designed to be uniform across languages with respect to the amount of data per language, the audio quality (microphone, noise, channel), the 
collection scenario (task, setup, speaking style), as well as the transcription and phone set conventions (IPA-based naming of phones in all pronunciation dictionaries).","gP supplies an excellent basis for research in the areas of (1) multilingual ASR, (2) rapid deployment of speech processing systems to yet unsupported languages, (3) language identification tasks, (4) speaker recognition in multiple languages, (5) multilingual speech synthesis, as well as (6) monolingual ASR.",reasoning test_1860,"However, texts used in our analysis are random sentences selected from different sources, mainly from newspapers.",we computed average TTR (ATTR) for the training transcription based on k disjunct 1000-utterance subsets of the training set instead of MATTR.,reasoning test_1861,"On the other hand, developing large-scale language resources is not economically viable.",alternative approaches need to be used to make Ethiopians benefit from speech and language processing tools.,reasoning test_1862,"We also gather participants' meta-data, including their gender, age, education, occupation, perceptions about CS as well as their personality traits, gathered through the Big-5 Personality Test.","this corpus serves as a useful resource in multiple fields, including NLP applications (mainly designed for ASR systems), linguistic analysis of the CS phenomenon, as well as sociolinguistic and psycholinguistic analyses.",reasoning test_1863,It is claimed that dialectal Arabic was used as a tool to heighten the distinctiveness and distance of Egypt from the rest of the Arab world.,"mSA was facing a threat posed by both, the foreign and dialectal languages.",reasoning test_1864,"It can be seen that 40% of the speakers are within the normal 140-160 range, 30% are below the rate and 30% are above.","for 30% of the interviews, accurate speech recognition for humans and ASR systems would be more challenging",reasoning test_1865,"In order to avoid having the interviewers as common speakers 
across all sets, their utterances have been placed in the train set.","the train set contains utterances from 4 female and 8 male interviewees, in addition to 1 female and 1 male interviewer.",reasoning test_1866,"Information about participants is also collected, including gender, age, educational background, perceptions about CS and personality traits.",the corpus serves as a useful resource for multiple research directions.,reasoning test_1867,"As mentioned before, the maximum size is 20 times higher than the minimum.","during 1 epoch for 1 sample in, for example, Russian as the target language, the model sees 20 samples in Odia.",reasoning test_1868,We observed that users tend to create ambiguous mentions in tweets when they employ any expression (for example first or last name) other than Twitter screen names to mention other users in their posts (see Section 1).,we have drawn inspiration from this usage to elaborate a simple process for both candidate entity and ambiguous mention generations.,reasoning test_1869,The Twitter Search API 5 returns a collection of relevant tweets matching the specified query.,"for each entity in the KB, i) we set its screen name (@user) as the query search; ii) we collected all the retrieved tweets, and iii) we filtered out tweets without images.",reasoning test_1870,"After a long study, we assumed that a thematic boundary can only be in the vicinity of a slide change during the course.","for each change of slide, a human expert annotated: 1. If there is a topic shift. 2. The exact moment of the topic shift defined as being positioned between two words. 3. 
The granularity of the topic shift (1 or 2) or if the segment type is an interruption.",reasoning test_1871,"In all, 134 of these occur with images showing flooding (IF), the majority (110) with articles about current flooding or the aftermath of recent flooding where the keyword is also found in the headline of the article.","there are 62 articles about current flooding with a flooding image and 48 articles about the aftermath of a recent flooding, where both the headline and the caption contain one of the keywords.",reasoning test_1872,"However, these same features mean that movies and TV are not representative of day-today life.",these datasets cannot be used to evaluate how well models perform when applied to realistic videos of day-to-day life.,reasoning test_1873,"However, unlike them, our proposed dataset relies on video clips that were recorded naturally by people, without predefined scripts.","understanding videos requires overcoming challenges such as environmental noise, camera movements, lighting conditions, and naturally occurring dialogues.",reasoning test_1874,"We interpret this behaviour as follows: For compounds with low-compositional modifiers the semantic relatedness compound-modifier is low, and here the strength of semantic relatedness compound-head (which is effectively WORD2) correlates with the degree of compositionality of the phrase.",in cases with low compound-modifier relatedness the degree of compositionality of the compound phrase and the compound-head pair are similar in their ranks across compounds.,reasoning test_1875,"BPE tokenization has become a de-facto standard way for processing sub-words in the era of BERT (Devlin et al., 2019) and BERT-like models.","we decided to draw a comparison between BPE tokenization and simpler character-level models, frequently used for segmentation in Chinese (Xue and Shen, 2003) or Arabic (Samih et al., 2017).",reasoning test_1876,"Despite some public pre-trained embedding vectors being already 
available for Portuguese (Bojanowski et al., 2017;Hartmann et al., 2017;Santos et al., 2019), the highly technical oil and gas vocabulary presents a challenge to Natural Language Processing applications, in which some terms may assume a completely different meaning compared to the general-context domain.","there are consistent evidences that generating embedding models from a domain-specific corpus can significantly increase the quality of their semantic representation and, hence, the performance of NLP applications on specialized downstream tasks on the same domain (Gomes et al., 2018;Nooralahzadeh et al., 2018;Lai et al., 2016).",reasoning test_1877,"However, it is hard to guarantee that each representation contains only the corresponding information.","reconstruction from these two parts directly might cause confliction in both content and sentiment aspects, which leads to poor performance in content preser-vation.",reasoning test_1878,Note that the sentiment representations should be suitable for the semantic content of the template.,the modification should be combined with the contextual information of the template with target sentiment information.,reasoning test_1879,"Since most research focuses on shorter sentence-level texts, it is not clear whether these models can form sufficiently long range dependencies to be useful as a substitute for genuine training data.",we believe that applying NLG approaches to medical text for augmentation purposes is a worthwhile research area in order to ascertain its viability.,reasoning test_1880,"We are still not at a stage yet where we can release de-identified patient data publicly, whether this genuine or synthetic, due to their unquantified susceptibility to re-identification attacks.","even if we develop a model which shows strong performance in generating synthetic clinical notes, these notes cannot be easily shared with the wider research community.",reasoning test_1881,"Although EIEC is annotated with four entity 
types (Location, Person, Organization and Miscellaneous), the Miscellaneous class is rather sparse, occurring only in a proportion of 1 to 10 with respect to the other three classes.",in the training data there are 156 entities annotated as Miscellaneous whereas for each of the other three classes it contains around 1200 entities.,reasoning test_1882,"was first collected in order to make a cross-cultural comparison of smiling during humorous productions between American English and French (Priego-Valverde et al., 2018).","it was recorded following the American protocol, as closely as possible, especially concerning the tasks given to the participants (see section 2.2).",reasoning test_1883,"They show that, even if smiling is present in the whole conversation, its intensity is higher during humor productions.",the authors hypothesize that an increase in smiling intensity is a more significant way to frame an exchange as humorous than the sole presence of smiling.,reasoning test_1884,"Laughter is far less frequent and, unsurprisingly, displaying a neutral face is rare.","observing smiling in a binary way (presence/absence), tends to confirm the assumption that smiling is a marker of humor.",reasoning test_1885,"On the one hand, smiling is the most present facial behavior both in successful and in failed humor.",it does not seem to reduce the risk of failure.,reasoning test_1886,"Using BLEU, these two sentences would not receive the same score since BLEU is based on orthographic similarity.",bLEU inherently penalizes use of approved terminology even when it is used appropriately.,reasoning test_1887,"The overall architecture of the ontology follows the guiding principles and complies with the data standards for cataloguing cultural objects (Baca, 2006).",it could be easily incorporated in larger projects for ontological modelling of knowledge on cultural and historical heritage.,reasoning test_1888,The object property spokenIn relates a dialect with an administrative 
or geographic location where this dialect can be found.,"each dialect is related to at least one place, but multiple relations of this type are also conceivable.",reasoning test_1889,"In addition, there is also an interface package (IBGDialectsOntology) which contains all the methods that might be used to extract information from the ontology, and defines the parameters that could be sent by the graphical user interface.",we enable users to choose among different criteria for filtering and extracting information.,reasoning test_1890,The method is public and implements the method getIndividualsInSignature() of the org.semanticweb.owlapi.model.OWLOntology class.,"it returns a list of the type OWLNamedIndividual, which can then be handled by the BgDialectsOnto methods.",reasoning test_1891,"Although the features encoded in the ontology seem rather specific for the historical development of the Bulgarian language and its dialects, the overall design of the hierarchies of classes and properties can be easily adapted and applied to other languages.",it offers scope for valuable contribution that goes beyond national and regional borders.,reasoning test_1892,The taxonomy is a tree structure with the majority of nodes positioned near the bottom of the tree.,"as there are only a handful of nodes near the top, each time the random walk restarts, it is far more likely to start the random walk at a leaf node somewhere at the bottom of the taxonomy, rather than at the top.",reasoning test_1893,"Among the collected reviews, we worked on 6,287 reviews, which were segmented into sentences.","the dataset consists of 17,268 sentences whose length is about 10 tokens on average.",reasoning test_1894,Online reviews are unstructured information and may contain various types of noise.,the process of cleaning and normalization of text is essential for the analysis.,reasoning test_1895,"Since the adversative conjunction guide the reader to more salient information, the argumentative force 
is combined with the latter statement.","we can categorize the first statement as POS OPINION and the second one as NEG OPINION, and yet attribute more weights on the latter.",reasoning test_1896,"The proposed scheme was specific to restaurants, but it can also be applied to other places such as hotels, vacation spots, shop- ping malls, theaters, etc., particularly considering that getting the visitors to revisit should be one of their primary goals.",a natural extension of this work would be to study the possibility of application of our scheme in other places and different languages.,reasoning test_1897,Nowadays Personal Assistants (PAs) are available in multiple environments and become increasingly popular to use via voice.,we aim to provide proactive PA suggestions to car drivers via speech.,reasoning test_1898,"Especially, when the user drives or is busy with another task at home (e.g., cooking), the interaction with a PA is only the secondary task.","user experience designers need to focus on the user's cognitive load in such settings, too.",reasoning test_1899,"In binary classification framework, we regrouped the reviews as proposed in (Nabil et al., 2014): the reviews associated with one or two stars compose the negative class and those with four or five stars represent the positive class.",the neutral reviews are not considered.,reasoning test_1900,"These embeddings are created by taking into account the morphology of words, instead of treating them as distinct units.","using this method, each word is represented as a sum of the representations of the character N-grams constituting it, giving us the opportunity to experiment with models that focus on N-grams in a sample to perform the classification.",reasoning test_1901,"This is not the case for fastText embeddings, as they are non-contextual by nature.",the non-contextual linguistic features may not add the same amount of advantage to the models using fastText embeddings in comparison to the models that 
use BERT and CamemBERT embeddings.,reasoning test_1902," Due to this factor, and also because tweets in our collection are not longer than 140 characters11, most of the time only one false assertion is present in the text","the syntactic head of the only false assertion present (or false reported speech) has been easily marked, thus lowering the possibility of disagreement.",reasoning test_1903,"Because of this, occasionally during the voting period, we manually deprioritized the tweets that got three or more negative votes, to keep in the pool only the tweets that still had a chance of being considered positive.","the corpus contains some tweets that do not have five votes, mainly the non-humorous ones.",reasoning test_1904,"We manually inspected all pairs, clustered them into equivalence classes, and took one example from each class discarding the others from the corpus.","we pruned 1,278 tweets from the corpus, most of them were humorous.",reasoning test_1905,"At the same time, finding jokes within in-the-wild long texts can be problematic since you have to account for its boundaries concerning non-humorous content.","we collect jokes from Twitter, supposing a tweet is either completely humorous or not at all.",reasoning test_1906,"A low degree of morphological ambiguity is a planned design feature of Esperanto and, together with its regular inflection and affixation system, meant to make the language easy to learn.","automatic annotation is very reliable at this level, and few ambiguity classes exist, with little need for human revision.",reasoning test_1907,"The only systematic POS ambiguity is between proper nouns and other word classes because of upper-casing (especially in sentence-initial position), and in connection with tokenization errors.","the otherwise reliable vowel coding for POS (e.g. 
-o = noun, -a = adjective, -i = infinitive, -e = adverb) breaks down in the face of foreign names in (a) and (b).",reasoning test_1908,"However, we try to avoid unnecessary tag complexity by not introducing different syntactic tags, where POS already contains the distinction.","phrase-level modifiers are only attachment-tagged as prenominals (@>N) and postnominals (@NA)1 and post-adjects (@A<), not for what the modifier itself is (e.g. hypothetical @nmod for a modifier that is a nouns), because that would just be duplicated information.",reasoning test_1909,"It should be noted that some of the NE categories are intentionally vague and express ""semantic (lexical) form"" rather than ""semantic function"", leaving the latter to subsequent disambiguation at the semantic role level.","and can fill both agent and location slots, i.e. go to war or raise taxes on the one hand, and be lived in or traveled to on the other.",reasoning test_1910,"German and Danish, because compounding is more transparent in Esperanto than in languages with a lot of idiomatic traits.","the word 'bag' in German ('Tasche') or Danish ('taske') can occur as second part in compounds that do not denote a container, e.g. 
German 'Plaudertasche' (chatter box) and Danish 'havtaske' (monk fish).",reasoning test_1911,One way to elicit these senses during linguistic data revision was to look for compounds with 'fonto' where the first part can help to disambiguate the sense of the second.,"'akvofonto' (spring) is classified as , 'monfonto' (funding) as (abstract source), 'interretfonto' (online sources) as and 'petrolfonto' (oil well) as (human functional place).",reasoning test_1912,"In a breakdown of individual categories (table 2) ppattachment problems left their predictable mark, with postnominal pp's (PRP @N","19.8% of attachment errors and 26.8% of function errors involved the postnominal category, and 90% of cases were pp's.",reasoning test_1913,"Table 2 contains only the major categories, and it lumps all clause functions into only two groups, finite and nonfinite, but it clearly shows what is difficult for function tagging and for attachment tagging, respectively.","coordinators (@CO) and, to a lesser degree, adverbials (@ADVL) are more an attachment than a labeling problem, while copula complements (@SC) and subjects (@SUBJ) are more a labeling than an attachment problem.",reasoning test_1914,"The resulting value tells us, how much a category is over-represented among errors as compared to its share among running tokens.","the most unreliable categories in terms of function labeling are @N