bibtex_url
stringlengths
41
53
acl_proceedings
stringlengths
38
50
bibtext
stringlengths
528
3.02k
abstract
stringlengths
17
2.35k
authors
sequencelengths
1
44
title
stringlengths
18
190
id
stringlengths
7
19
arxiv_id
stringlengths
10
10
GitHub
sequencelengths
1
1
paper_page
stringclasses
528 values
n_linked_authors
int64
-1
15
upvotes
int64
-1
77
num_comments
int64
-1
10
n_authors
int64
-1
52
Models
sequencelengths
0
100
Datasets
sequencelengths
0
15
Spaces
sequencelengths
0
46
paper_page_exists_pre_conf
int64
0
1
type
stringclasses
2 values
https://aclanthology.org/2023.emnlp-main.301.bib
https://aclanthology.org/2023.emnlp-main.301/
@inproceedings{kamath-etal-2023-text, title = "Text encoders bottleneck compositionality in contrastive vision-language models", author = "Kamath, Amita and Hessel, Jack and Chang, Kai-Wei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.301", doi = "10.18653/v1/2023.emnlp-main.301", pages = "4933--4944", abstract = "Performant vision-language (VL) models like CLIP represent captions using a single vector. How much information about language is lost in this bottleneck? We first curate CompPrompts, a set of increasingly compositional image captions that VL models should be able to capture (e.g., single object, to object+property, to multiple interacting objects). Then, we train text-only recovery probes that aim to reconstruct captions from single-vector text representations produced by several VL models. This approach does not require images, allowing us to test on a broader range of scenes compared to prior work. We find that: 1) CLIP{'}s text encoder falls short on more compositional inputs, including object relationships, attribute-object association, counting, and negations; 2) some text encoders work significantly better than others; and 3) text-only recovery performance predicts multimodal matching performance on ControlledImCaps: a new evaluation benchmark we collect and release consisting of fine-grained compositional images and captions. Specifically, our results suggest text-only recoverability is a necessary (but not sufficient) condition for modeling compositional factors in contrastive VL models. We release our datasets and code.", }
Performant vision-language (VL) models like CLIP represent captions using a single vector. How much information about language is lost in this bottleneck? We first curate CompPrompts, a set of increasingly compositional image captions that VL models should be able to capture (e.g., single object, to object+property, to multiple interacting objects). Then, we train text-only recovery probes that aim to reconstruct captions from single-vector text representations produced by several VL models. This approach does not require images, allowing us to test on a broader range of scenes compared to prior work. We find that: 1) CLIP{'}s text encoder falls short on more compositional inputs, including object relationships, attribute-object association, counting, and negations; 2) some text encoders work significantly better than others; and 3) text-only recovery performance predicts multimodal matching performance on ControlledImCaps: a new evaluation benchmark we collect and release consisting of fine-grained compositional images and captions. Specifically, our results suggest text-only recoverability is a necessary (but not sufficient) condition for modeling compositional factors in contrastive VL models. We release our datasets and code.
[ "Kamath, Amita", "Hessel, Jack", "Chang, Kai-Wei" ]
Text encoders bottleneck compositionality in contrastive vision-language models
emnlp-main.301
2305.14897
[ "https://github.com/amitakamath/vl_text_encoders_are_bottlenecks" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.302.bib
https://aclanthology.org/2023.emnlp-main.302/
@inproceedings{schulhoff-etal-2023-ignore, title = "Ignore This Title and {H}ack{AP}rompt: Exposing Systemic Vulnerabilities of {LLM}s Through a Global Prompt Hacking Competition", author = "Schulhoff, Sander and Pinto, Jeremy and Khan, Anaum and Bouchard, Louis-Fran{\c{c}}ois and Si, Chenglei and Anati, Svetlina and Tagliabue, Valen and Kost, Anson and Carnahan, Christopher and Boyd-Graber, Jordan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.302", doi = "10.18653/v1/2023.emnlp-main.302", pages = "4945--4977", abstract = "Large Language Models (LLMs) are increasingly being deployed in interactive contexts that involve direct user engagement, such as chatbots and writing assistants. These deployments are increasingly plagued by prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and instead follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of a large-scale resource and quantitative study on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive ontology of the types of adversarial prompts.", }
Large Language Models (LLMs) are increasingly being deployed in interactive contexts that involve direct user engagement, such as chatbots and writing assistants. These deployments are increasingly plagued by prompt injection and jailbreaking (collectively, prompt hacking), in which models are manipulated to ignore their original instructions and instead follow potentially malicious ones. Although widely acknowledged as a significant security threat, there is a dearth of a large-scale resource and quantitative study on prompt hacking. To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks. We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking. We also present a comprehensive ontology of the types of adversarial prompts.
[ "Schulhoff, S", "er", "Pinto, Jeremy", "Khan, Anaum", "Bouchard, Louis-Fran{\\c{c}}ois", "Si, Chenglei", "Anati, Svetlina", "Tagliabue, Valen", "Kost, Anson", "Carnahan, Christopher", "Boyd-Graber, Jordan" ]
Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs Through a Global Prompt Hacking Competition
emnlp-main.302
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.303.bib
https://aclanthology.org/2023.emnlp-main.303/
@inproceedings{li-etal-2023-mmnmt, title = "{MMNMT}: Modularizing Multilingual Neural Machine Translation with Flexibly Assembled {M}o{E} and Dense Blocks", author = "Li, Shangjie and Wei, Xiangpeng and Zhu, Shaolin and Xie, Jun and Yang, Baosong and Xiong, Deyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.303", doi = "10.18653/v1/2023.emnlp-main.303", pages = "4978--4990", abstract = "Mixture-of-Experts (MoE) based sparse architectures can significantly increase model capacity with sublinear computational overhead, which are hence widely used in massively multilingual neural machine translation (MNMT). However, they are prone to overfitting on low-resource language translation. In this paper, we propose a modularized MNMT framework that is able to flexibly assemble dense and MoE-based sparse modules to achieve the best of both worlds. The training strategy of the modularized MNMT framework consists of three stages: (1) Pre-training basic MNMT models with different training objectives or model structures, (2) Initializing modules of the framework with pre-trained couterparts (e.g., encoder, decoder and embedding layers) from the basic models and (3) Fine-tuning the modularized MNMT framework to fit modules from different models together. We pre-train three basic MNMT models from scratch: a dense model, an MoE-based sparse model and a new MoE model, termed as MoE-LGR that explores multiple Language-Group-specifc Routers to incorporate language group knowledge into MNMT. The strengths of these pre-trained models are either on low-resource language translation, high-resource language translation or zero-shot translation. Our modularized MNMT framework attempts to incorporate these advantages into a single model with reasonable initialization and fine-tuning. Experiments on widely-used benchmark datasets demonstrate that the proposed modularized MNMT framwork substantially outperforms both MoE and dense models on high- and low-resource language translation as well as zero-shot translation. Our framework facilitates the combination of different methods with their own strengths and recycling off-the-shelf models for multilingual neural machine translation. Codes are available at https://github.com/lishangjie1/MMNMT.", }
Mixture-of-Experts (MoE) based sparse architectures can significantly increase model capacity with sublinear computational overhead, which are hence widely used in massively multilingual neural machine translation (MNMT). However, they are prone to overfitting on low-resource language translation. In this paper, we propose a modularized MNMT framework that is able to flexibly assemble dense and MoE-based sparse modules to achieve the best of both worlds. The training strategy of the modularized MNMT framework consists of three stages: (1) Pre-training basic MNMT models with different training objectives or model structures, (2) Initializing modules of the framework with pre-trained couterparts (e.g., encoder, decoder and embedding layers) from the basic models and (3) Fine-tuning the modularized MNMT framework to fit modules from different models together. We pre-train three basic MNMT models from scratch: a dense model, an MoE-based sparse model and a new MoE model, termed as MoE-LGR that explores multiple Language-Group-specifc Routers to incorporate language group knowledge into MNMT. The strengths of these pre-trained models are either on low-resource language translation, high-resource language translation or zero-shot translation. Our modularized MNMT framework attempts to incorporate these advantages into a single model with reasonable initialization and fine-tuning. Experiments on widely-used benchmark datasets demonstrate that the proposed modularized MNMT framwork substantially outperforms both MoE and dense models on high- and low-resource language translation as well as zero-shot translation. Our framework facilitates the combination of different methods with their own strengths and recycling off-the-shelf models for multilingual neural machine translation. Codes are available at https://github.com/lishangjie1/MMNMT.
[ "Li, Shangjie", "Wei, Xiangpeng", "Zhu, Shaolin", "Xie, Jun", "Yang, Baosong", "Xiong, Deyi" ]
MMNMT: Modularizing Multilingual Neural Machine Translation with Flexibly Assembled MoE and Dense Blocks
emnlp-main.303
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.304.bib
https://aclanthology.org/2023.emnlp-main.304/
@inproceedings{wu-etal-2023-localizing, title = "Localizing Active Objects from Egocentric Vision with Symbolic World Knowledge", author = "Wu, Te-Lin and Zhou, Yu and Peng, Nanyun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.304", doi = "10.18653/v1/2023.emnlp-main.304", pages = "4991--5006", abstract = "The ability to actively ground task instructions from an egocentric view is crucial for AI agents to accomplish tasks or assist humans virtually. One important step towards this goal is to localize and track key active objects that undergo major state change as a consequence of human actions/interactions to the environment without being told exactly what/where to ground (e.g., localizing and tracking the {`}sponge{`} in video from the instruction {``}Dip the sponge into the bucket.{''}). While existing works approach this problem from a pure vision perspective, we investigate to which extent the textual modality (i.e., task instructions) and their interaction with visual modality can be beneficial. Specifically, we propose to improve phrase grounding models{'} ability on localizing the active objects by: (1) learning the role of {`}objects undergoing change{`} and extracting them accurately from the instructions, (2) leveraging pre- and post-conditions of the objects during actions, and (3) recognizing the objects more robustly with descriptional knowledge. We leverage large language models (LLMs) to extract the aforementioned action-object knowledge, and design a per-object aggregation masking technique to effectively perform joint inference on object phrases and symbolic knowledge. We evaluate our framework on Ego4D and Epic-Kitchens datasets. Extensive experiments demonstrate the effectiveness of our proposed framework, which leads to{\textgreater}54{\%} improvements in all standard metrics on the TREK-150-OPE-Det localization + tracking task, {\textgreater}7{\%} improvements in all standard metrics on the TREK-150-OPE tracking task, and {\textgreater}3{\%} improvements in average precision (AP) on the Ego4D SCOD task.", }
The ability to actively ground task instructions from an egocentric view is crucial for AI agents to accomplish tasks or assist humans virtually. One important step towards this goal is to localize and track key active objects that undergo major state change as a consequence of human actions/interactions to the environment without being told exactly what/where to ground (e.g., localizing and tracking the {`}sponge{`} in video from the instruction {``}Dip the sponge into the bucket.{''}). While existing works approach this problem from a pure vision perspective, we investigate to which extent the textual modality (i.e., task instructions) and their interaction with visual modality can be beneficial. Specifically, we propose to improve phrase grounding models{'} ability on localizing the active objects by: (1) learning the role of {`}objects undergoing change{`} and extracting them accurately from the instructions, (2) leveraging pre- and post-conditions of the objects during actions, and (3) recognizing the objects more robustly with descriptional knowledge. We leverage large language models (LLMs) to extract the aforementioned action-object knowledge, and design a per-object aggregation masking technique to effectively perform joint inference on object phrases and symbolic knowledge. We evaluate our framework on Ego4D and Epic-Kitchens datasets. Extensive experiments demonstrate the effectiveness of our proposed framework, which leads to{\textgreater}54{\%} improvements in all standard metrics on the TREK-150-OPE-Det localization + tracking task, {\textgreater}7{\%} improvements in all standard metrics on the TREK-150-OPE tracking task, and {\textgreater}3{\%} improvements in average precision (AP) on the Ego4D SCOD task.
[ "Wu, Te-Lin", "Zhou, Yu", "Peng, Nanyun" ]
Localizing Active Objects from Egocentric Vision with Symbolic World Knowledge
emnlp-main.304
2310.15066
[ "https://github.com/pluslabnlp/envision" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.305.bib
https://aclanthology.org/2023.emnlp-main.305/
@inproceedings{bothwell-etal-2023-introducing, title = "Introducing Rhetorical Parallelism Detection: A New Task with Datasets, Metrics, and Baselines", author = {Bothwell, Stephen and DeBenedetto, Justin and Crnkovich, Theresa and M{\"u}ller, Hildegund and Chiang, David}, editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.305", doi = "10.18653/v1/2023.emnlp-main.305", pages = "5007--5039", abstract = "Rhetoric, both spoken and written, involves not only content but also style. One common stylistic tool is $\textit{parallelism}$: the juxtaposition of phrases which have the same sequence of linguistic ($\textit{e.g.}$, phonological, syntactic, semantic) features. Despite the ubiquity of parallelism, the field of natural language processing has seldom investigated it, missing a chance to better understand the nature of the structure, meaning, and intent that humans convey. To address this, we introduce the task of $\textit{rhetorical parallelism detection}$. We construct a formal definition of it; we provide one new Latin dataset and one adapted Chinese dataset for it; we establish a family of metrics to evaluate performance on it; and, lastly, we create baseline systems and novel sequence labeling schemes to capture it. On our strictest metric, we attain F$_1$ scores of 0.40 and 0.43 on our Latin and Chinese datasets, respectively.", }
Rhetoric, both spoken and written, involves not only content but also style. One common stylistic tool is $\textit{parallelism}$: the juxtaposition of phrases which have the same sequence of linguistic ($\textit{e.g.}$, phonological, syntactic, semantic) features. Despite the ubiquity of parallelism, the field of natural language processing has seldom investigated it, missing a chance to better understand the nature of the structure, meaning, and intent that humans convey. To address this, we introduce the task of $\textit{rhetorical parallelism detection}$. We construct a formal definition of it; we provide one new Latin dataset and one adapted Chinese dataset for it; we establish a family of metrics to evaluate performance on it; and, lastly, we create baseline systems and novel sequence labeling schemes to capture it. On our strictest metric, we attain F$_1$ scores of 0.40 and 0.43 on our Latin and Chinese datasets, respectively.
[ "Bothwell, Stephen", "DeBenedetto, Justin", "Crnkovich, Theresa", "M{\\\"u}ller, Hildegund", "Chiang, David" ]
Introducing Rhetorical Parallelism Detection: A New Task with Datasets, Metrics, and Baselines
emnlp-main.305
2312.00100
[ "https://github.com/mythologos/paibi-student-essays" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.306.bib
https://aclanthology.org/2023.emnlp-main.306/
@inproceedings{hu-levy-2023-prompting, title = "Prompting is not a substitute for probability measurements in large language models", author = "Hu, Jennifer and Levy, Roger", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.306", doi = "10.18653/v1/2023.emnlp-main.306", pages = "5040--5060", abstract = "Prompting is now a dominant method for evaluating the linguistic knowledge of large language models (LLMs). While other methods directly read out models{'} probability distributions over strings, prompting requires models to access this internal information by processing linguistic input, thereby implicitly testing a new type of emergent ability: metalinguistic judgment. In this study, we compare metalinguistic prompting and direct probability measurements as ways of measuring models{'} linguistic knowledge. Broadly, we find that LLMs{'} metalinguistic judgments are inferior to quantities directly derived from representations. Furthermore, consistency gets worse as the prompt query diverges from direct measurements of next-word probabilities. Our findings suggest that negative results relying on metalinguistic prompts cannot be taken as conclusive evidence that an LLM lacks a particular linguistic generalization. Our results also highlight the value that is lost with the move to closed APIs where access to probability distributions is limited.", }
Prompting is now a dominant method for evaluating the linguistic knowledge of large language models (LLMs). While other methods directly read out models{'} probability distributions over strings, prompting requires models to access this internal information by processing linguistic input, thereby implicitly testing a new type of emergent ability: metalinguistic judgment. In this study, we compare metalinguistic prompting and direct probability measurements as ways of measuring models{'} linguistic knowledge. Broadly, we find that LLMs{'} metalinguistic judgments are inferior to quantities directly derived from representations. Furthermore, consistency gets worse as the prompt query diverges from direct measurements of next-word probabilities. Our findings suggest that negative results relying on metalinguistic prompts cannot be taken as conclusive evidence that an LLM lacks a particular linguistic generalization. Our results also highlight the value that is lost with the move to closed APIs where access to probability distributions is limited.
[ "Hu, Jennifer", "Levy, Roger" ]
Prompting is not a substitute for probability measurements in large language models
emnlp-main.306
2305.13264
[ "https://github.com/jennhu/metalinguistic-prompting" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.307.bib
https://aclanthology.org/2023.emnlp-main.307/
@inproceedings{jukic-snajder-2023-parameter, title = "Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings", author = "Juki{\'c}, Josip and Snajder, Jan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.307", doi = "10.18653/v1/2023.emnlp-main.307", pages = "5061--5074", abstract = "Pre-trained language models (PLMs) have ignited a surge in demand for effective fine-tuning techniques, particularly in low-resource domains and languages. Active learning (AL), a set of algorithms designed to decrease labeling costs by minimizing label complexity, has shown promise in confronting the labeling bottleneck. In parallel, adapter modules designed for parameter-efficient fine-tuning (PEFT) have demonstrated notable potential in low-resource settings. However, the interplay between AL and adapter-based PEFT remains unexplored. We present an empirical study of PEFT behavior with AL in low-resource settings for text classification tasks. Our findings affirm the superiority of PEFT over full-fine tuning (FFT) in low-resource settings and demonstrate that this advantage persists in AL setups. We further examine the properties of PEFT and FFT through the lens of forgetting dynamics and instance-level representations, where we find that PEFT yields more stable representations of early and middle layers compared to FFT. Our research underscores the synergistic potential of AL and PEFT in low-resource settings, paving the way for advancements in efficient and effective fine-tuning.", }
Pre-trained language models (PLMs) have ignited a surge in demand for effective fine-tuning techniques, particularly in low-resource domains and languages. Active learning (AL), a set of algorithms designed to decrease labeling costs by minimizing label complexity, has shown promise in confronting the labeling bottleneck. In parallel, adapter modules designed for parameter-efficient fine-tuning (PEFT) have demonstrated notable potential in low-resource settings. However, the interplay between AL and adapter-based PEFT remains unexplored. We present an empirical study of PEFT behavior with AL in low-resource settings for text classification tasks. Our findings affirm the superiority of PEFT over full-fine tuning (FFT) in low-resource settings and demonstrate that this advantage persists in AL setups. We further examine the properties of PEFT and FFT through the lens of forgetting dynamics and instance-level representations, where we find that PEFT yields more stable representations of early and middle layers compared to FFT. Our research underscores the synergistic potential of AL and PEFT in low-resource settings, paving the way for advancements in efficient and effective fine-tuning.
[ "Juki{\\'c}, Josip", "Snajder, Jan" ]
Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings
emnlp-main.307
null
[ "https://github.com/josipjukic/adapter-al" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.308.bib
https://aclanthology.org/2023.emnlp-main.308/
@inproceedings{jacovi-etal-2023-stop, title = "Stop Uploading Test Data in Plain Text: Practical Strategies for Mitigating Data Contamination by Evaluation Benchmarks", author = "Jacovi, Alon and Caciularu, Avi and Goldman, Omer and Goldberg, Yoav", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.308", doi = "10.18653/v1/2023.emnlp-main.308", pages = "5075--5084", abstract = "Data contamination has become prevalent and challenging with the rise of models pretrained on large automatically-crawled corpora. For closed models, the training data becomes a trade secret, and even for open models, it is not trivial to detect contamination. Strategies such as leaderboards with hidden answers, or using test data which is guaranteed to be unseen, are expensive and become fragile with time. Assuming that all relevant actors value clean test data and will cooperate to mitigate data contamination, what can be done? We propose three strategies that can make a difference: (1) Test data made public should be encrypted with a public key and licensed to disallow derivative distribution; (2) demand training exclusion controls from closed API holders, and protect your test data by refusing to evaluate without them; (3) avoid data which appears with its solution on the internet, and release the web-page context of internet-derived data along with the data. These strategies are practical and can be effective in preventing data contamination.", }
Data contamination has become prevalent and challenging with the rise of models pretrained on large automatically-crawled corpora. For closed models, the training data becomes a trade secret, and even for open models, it is not trivial to detect contamination. Strategies such as leaderboards with hidden answers, or using test data which is guaranteed to be unseen, are expensive and become fragile with time. Assuming that all relevant actors value clean test data and will cooperate to mitigate data contamination, what can be done? We propose three strategies that can make a difference: (1) Test data made public should be encrypted with a public key and licensed to disallow derivative distribution; (2) demand training exclusion controls from closed API holders, and protect your test data by refusing to evaluate without them; (3) avoid data which appears with its solution on the internet, and release the web-page context of internet-derived data along with the data. These strategies are practical and can be effective in preventing data contamination.
[ "Jacovi, Alon", "Caciularu, Avi", "Goldman, Omer", "Goldberg, Yoav" ]
Stop Uploading Test Data in Plain Text: Practical Strategies for Mitigating Data Contamination by Evaluation Benchmarks
emnlp-main.308
2305.10160
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.309.bib
https://aclanthology.org/2023.emnlp-main.309/
@inproceedings{ainslie-etal-2023-colt5, title = "{C}o{LT}5: Faster Long-Range Transformers with Conditional Computation", author = "Ainslie, Joshua and Lei, Tao and de Jong, Michiel and Ontanon, Santiago and Brahma, Siddhartha and Zemlyanskiy, Yury and Uthus, David and Guo, Mandy and Lee-Thorp, James and Tay, Yi and Sung, Yun-Hsuan and Sanghai, Sumit", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.309", doi = "10.18653/v1/2023.emnlp-main.309", pages = "5085--5100", abstract = "Many natural language processing tasks benefit from long inputs, but processing long documents with Transformers is expensive {--} not only due to quadratic attention complexity but also from applying feedforward and projection layers to every token. However, not all tokens are equally important, especially for longer documents. We propose CoLT5, a long-input Transformer model that builds on this intuition by employing conditional computation, devoting more resources to important tokens in both feedforward and attention layers. We show that CoLT5 achieves stronger performance than LongT5 with much faster training and inference, achieving SOTA on the long-input SCROLLS benchmark. Moreover, CoLT5 can effectively and tractably make use of extremely long inputs, showing strong gains up to 64k input length.", }
Many natural language processing tasks benefit from long inputs, but processing long documents with Transformers is expensive {--} not only due to quadratic attention complexity but also from applying feedforward and projection layers to every token. However, not all tokens are equally important, especially for longer documents. We propose CoLT5, a long-input Transformer model that builds on this intuition by employing conditional computation, devoting more resources to important tokens in both feedforward and attention layers. We show that CoLT5 achieves stronger performance than LongT5 with much faster training and inference, achieving SOTA on the long-input SCROLLS benchmark. Moreover, CoLT5 can effectively and tractably make use of extremely long inputs, showing strong gains up to 64k input length.
[ "Ainslie, Joshua", "Lei, Tao", "de Jong, Michiel", "Ontanon, Santiago", "Brahma, Siddhartha", "Zemlyanskiy, Yury", "Uthus, David", "Guo, M", "y", "Lee-Thorp, James", "Tay, Yi", "Sung, Yun-Hsuan", "Sanghai, Sumit" ]
CoLT5: Faster Long-Range Transformers with Conditional Computation
emnlp-main.309
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.310.bib
https://aclanthology.org/2023.emnlp-main.310/
@inproceedings{venkateswaran-etal-2023-district, title = "{D}i{STRICT}: Dialogue State Tracking with Retriever Driven In-Context Tuning", author = "Venkateswaran, Praveen and Duesterwald, Evelyn and Isahagian, Vatche", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.310", doi = "10.18653/v1/2023.emnlp-main.310", pages = "5101--5112", abstract = "Dialogue State Tracking (DST), a key component of task-oriented conversation systems, represents user intentions by determining the values of pre-defined slots in an ongoing dialogue. Existing approaches use hand-crafted templates and additional slot information to fine-tune and prompt large pre-trained language models and elicit slot values from the dialogue context. Significant manual effort and domain knowledge is required to design effective prompts, limiting the generalizability of these approaches to new domains and tasks. In this work, we propose DiSTRICT, a generalizable in-context tuning approach for DST that retrieves highly relevant training examples for a given dialogue to fine-tune the model without any hand-crafted templates. Experiments with the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings using a much smaller model, thereby providing an important advantage for real-world deployments that often have limited resource availability.", }
Dialogue State Tracking (DST), a key component of task-oriented conversation systems, represents user intentions by determining the values of pre-defined slots in an ongoing dialogue. Existing approaches use hand-crafted templates and additional slot information to fine-tune and prompt large pre-trained language models and elicit slot values from the dialogue context. Significant manual effort and domain knowledge is required to design effective prompts, limiting the generalizability of these approaches to new domains and tasks. In this work, we propose DiSTRICT, a generalizable in-context tuning approach for DST that retrieves highly relevant training examples for a given dialogue to fine-tune the model without any hand-crafted templates. Experiments with the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings using a much smaller model, thereby providing an important advantage for real-world deployments that often have limited resource availability.
[ "Venkateswaran, Praveen", "Duesterwald, Evelyn", "Isahagian, Vatche" ]
DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context Tuning
emnlp-main.310
2212.02851
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.311.bib
https://aclanthology.org/2023.emnlp-main.311/
@inproceedings{wu-etal-2023-cross, title = "Cross-Cultural Analysis of Human Values, Morals, and Biases in Folk Tales", author = "Wu, Winston and Wang, Lu and Mihalcea, Rada", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.311", doi = "10.18653/v1/2023.emnlp-main.311", pages = "5113--5125", abstract = "Folk tales are strong cultural and social influences in children{'}s lives, and they are known to teach morals and values. However, existing studies on folk tales are largely limited to European tales. In our study, we compile a large corpus of over 1,900 tales originating from 27 diverse cultures across six continents. Using a range of lexicons and correlation analyses, we examine how human values, morals, and gender biases are expressed in folk tales across cultures. We discover differences between cultures in prevalent values and morals, as well as cross-cultural trends in problematic gender biases. Furthermore, we find trends of reduced value expression when examining public-domain fiction stories, extrinsically validate our analyses against the multicultural Schwartz Survey of Cultural Values and the Global Gender Gap Report, and find traditional gender biases associated with values, morals, and agency. This large-scale cross-cultural study of folk tales paves the way towards future studies on how literature influences and reflects cultural norms.", }
Folk tales are strong cultural and social influences in children{'}s lives, and they are known to teach morals and values. However, existing studies on folk tales are largely limited to European tales. In our study, we compile a large corpus of over 1,900 tales originating from 27 diverse cultures across six continents. Using a range of lexicons and correlation analyses, we examine how human values, morals, and gender biases are expressed in folk tales across cultures. We discover differences between cultures in prevalent values and morals, as well as cross-cultural trends in problematic gender biases. Furthermore, we find trends of reduced value expression when examining public-domain fiction stories, extrinsically validate our analyses against the multicultural Schwartz Survey of Cultural Values and the Global Gender Gap Report, and find traditional gender biases associated with values, morals, and agency. This large-scale cross-cultural study of folk tales paves the way towards future studies on how literature influences and reflects cultural norms.
[ "Wu, Winston", "Wang, Lu", "Mihalcea, Rada" ]
Cross-Cultural Analysis of Human Values, Morals, and Biases in Folk Tales
emnlp-main.311
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.312.bib
https://aclanthology.org/2023.emnlp-main.312/
@inproceedings{zhong-etal-2023-non, title = "Non-Programmers Can Label Programs Indirectly via Active Examples: A Case Study with Text-to-{SQL}", author = "Zhong, Ruiqi and Snell, Charlie and Klein, Dan and Eisner, Jason", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.312", doi = "10.18653/v1/2023.emnlp-main.312", pages = "5126--5152", abstract = "Can non-programmers annotate natural language utterances with complex programs that represent their meaning? We introduce APEL, a framework in which non-programmers select among candidate programs generated by a seed semantic parser (e.g., Codex). Since they cannot understand the candidate programs, we ask them to select indirectly by examining the programs{'} input-ouput examples. For each utterance, APEL actively searches for a simple input on which the candidate programs tend to produce different outputs. It then asks the non-programmers only to choose the appropriate output, thus allowing us to infer which program is correct and could be used to fine-tune the parser. As a first case study, we recruited human non-programmers to use APEL to re-annotate SPIDER, a text-to-SQL dataset. Our approach achieved the same annotation accuracy as the original expert annotators (75{\%}) and exposed many subtle errors in the original annotations.", }
Can non-programmers annotate natural language utterances with complex programs that represent their meaning? We introduce APEL, a framework in which non-programmers select among candidate programs generated by a seed semantic parser (e.g., Codex). Since they cannot understand the candidate programs, we ask them to select indirectly by examining the programs{'} input-ouput examples. For each utterance, APEL actively searches for a simple input on which the candidate programs tend to produce different outputs. It then asks the non-programmers only to choose the appropriate output, thus allowing us to infer which program is correct and could be used to fine-tune the parser. As a first case study, we recruited human non-programmers to use APEL to re-annotate SPIDER, a text-to-SQL dataset. Our approach achieved the same annotation accuracy as the original expert annotators (75{\%}) and exposed many subtle errors in the original annotations.
[ "Zhong, Ruiqi", "Snell, Charlie", "Klein, Dan", "Eisner, Jason" ]
Non-Programmers Can Label Programs Indirectly via Active Examples: A Case Study with Text-to-SQL
emnlp-main.312
2205.12422
[ "https://github.com/ruiqi-zhong/emnlp23-apel" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.313.bib
https://aclanthology.org/2023.emnlp-main.313/
@inproceedings{olausson-etal-2023-linc, title = "{LINC}: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers", author = "Olausson, Theo and Gu, Alex and Lipkin, Ben and Zhang, Cedegao and Solar-Lezama, Armando and Tenenbaum, Joshua and Levy, Roger", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.313", doi = "10.18653/v1/2023.emnlp-main.313", pages = "5153--5176", abstract = "Logical reasoning, i.e., deductively inferring the truth value of a conclusion from a set of premises, is an important task for artificial intelligence with wide potential impacts on science, mathematics, and society. While many prompting-based strategies have been proposed to enable Large Language Models (LLMs) to do such reasoning more effectively, they still appear unsatisfactory, often failing in subtle and unpredictable ways. In this work, we investigate the validity of instead reformulating such tasks as modular neurosymbolic programming, which we call LINC: Logical Inference via Neurosymbolic Computation. In LINC, the LLM acts as a semantic parser, translating premises and conclusions from natural language to expressions in first-order logic. These expressions are then offloaded to an external theorem prover, which symbolically performs deductive inference. Leveraging this approach, we observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate. On ProofWriter, augmenting the comparatively small open-source StarCoder+ (15.5B parameters) with LINC even outperforms GPT-3.5 and GPT-4 with Chain-of-Thought (CoT) prompting by an absolute 38{\%} and 10{\%}, respectively. When used with GPT-4, LINC scores 26{\%} higher than CoT on ProofWriter while performing comparatively on FOLIO. Further analysis reveals that although both methods on average succeed roughly equally often on this dataset, they exhibit distinct and complementary failure modes. We thus provide promising evidence for how logical reasoning over natural language can be tackled through jointly leveraging LLMs alongside symbolic provers. All corresponding code is publicly available.", }
Logical reasoning, i.e., deductively inferring the truth value of a conclusion from a set of premises, is an important task for artificial intelligence with wide potential impacts on science, mathematics, and society. While many prompting-based strategies have been proposed to enable Large Language Models (LLMs) to do such reasoning more effectively, they still appear unsatisfactory, often failing in subtle and unpredictable ways. In this work, we investigate the validity of instead reformulating such tasks as modular neurosymbolic programming, which we call LINC: Logical Inference via Neurosymbolic Computation. In LINC, the LLM acts as a semantic parser, translating premises and conclusions from natural language to expressions in first-order logic. These expressions are then offloaded to an external theorem prover, which symbolically performs deductive inference. Leveraging this approach, we observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate. On ProofWriter, augmenting the comparatively small open-source StarCoder+ (15.5B parameters) with LINC even outperforms GPT-3.5 and GPT-4 with Chain-of-Thought (CoT) prompting by an absolute 38{\%} and 10{\%}, respectively. When used with GPT-4, LINC scores 26{\%} higher than CoT on ProofWriter while performing comparatively on FOLIO. Further analysis reveals that although both methods on average succeed roughly equally often on this dataset, they exhibit distinct and complementary failure modes. We thus provide promising evidence for how logical reasoning over natural language can be tackled through jointly leveraging LLMs alongside symbolic provers. All corresponding code is publicly available.
[ "Olausson, Theo", "Gu, Alex", "Lipkin, Ben", "Zhang, Cedegao", "Solar-Lezama, Arm", "o", "Tenenbaum, Joshua", "Levy, Roger" ]
LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers
emnlp-main.313
2310.15164
[ "https://github.com/benlipkin/linc" ]
https://huggingface.co/papers/2310.15164
1
1
0
7
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.314.bib
https://aclanthology.org/2023.emnlp-main.314/
@inproceedings{ma-etal-2023-non, title = "Non-autoregressive Streaming Transformer for Simultaneous Translation", author = "Ma, Zhengrui and Zhang, Shaolei and Guo, Shoutao and Shao, Chenze and Zhang, Min and Feng, Yang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.314", doi = "10.18653/v1/2023.emnlp-main.314", pages = "5177--5190", abstract = "Simultaneous machine translation (SiMT) models are trained to strike a balance between latency and translation quality. However, training these models to achieve high quality while maintaining low latency often leads to a tendency for aggressive anticipation. We argue that such issue stems from the autoregressive architecture upon which most existing SiMT models are built. To address those issues, we propose non-autoregressive streaming Transformer (NAST) which comprises a unidirectional encoder and a non-autoregressive decoder with intra-chunk parallelism. We enable NAST to generate the blank token or repetitive tokens to adjust its READ/WRITE strategy flexibly, and train it to maximize the non-monotonic latent alignment with an alignment-based latency loss. Experiments on various SiMT benchmarks demonstrate that NAST outperforms previous strong autoregressive SiMT baselines.", }
Simultaneous machine translation (SiMT) models are trained to strike a balance between latency and translation quality. However, training these models to achieve high quality while maintaining low latency often leads to a tendency for aggressive anticipation. We argue that such issue stems from the autoregressive architecture upon which most existing SiMT models are built. To address those issues, we propose non-autoregressive streaming Transformer (NAST) which comprises a unidirectional encoder and a non-autoregressive decoder with intra-chunk parallelism. We enable NAST to generate the blank token or repetitive tokens to adjust its READ/WRITE strategy flexibly, and train it to maximize the non-monotonic latent alignment with an alignment-based latency loss. Experiments on various SiMT benchmarks demonstrate that NAST outperforms previous strong autoregressive SiMT baselines.
[ "Ma, Zhengrui", "Zhang, Shaolei", "Guo, Shoutao", "Shao, Chenze", "Zhang, Min", "Feng, Yang" ]
Non-autoregressive Streaming Transformer for Simultaneous Translation
emnlp-main.314
null
[ "https://github.com/ictnlp/nast" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.315.bib
https://aclanthology.org/2023.emnlp-main.315/
@inproceedings{nguyen-etal-2023-visobert, title = "{V}i{S}o{BERT}: A Pre-Trained Language Model for {V}ietnamese Social Media Text Processing", author = "Nguyen, Nam and Phan, Thang and Nguyen, Duc-Vu and Nguyen, Kiet", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.315", doi = "10.18653/v1/2023.emnlp-main.315", pages = "5191--5207", abstract = "English and Chinese, known as resource-rich languages, have witnessed the strong development of transformer-based language models for natural language processing tasks. Although Vietnam has approximately 100M people speaking Vietnamese, several pre-trained models, e.g., PhoBERT, ViBERT, and vELECTRA, performed well on general Vietnamese NLP tasks, including POS tagging and named entity recognition. These pre-trained language models are still limited to Vietnamese social media tasks. In this paper, we present the first monolingual pre-trained language model for Vietnamese social media texts, ViSoBERT, which is pre-trained on a large-scale corpus of high-quality and diverse Vietnamese social media texts using XLM-R architecture. Moreover, we explored our pre-trained model on five important natural language downstream tasks on Vietnamese social media texts: emotion recognition, hate speech detection, sentiment analysis, spam reviews detection, and hate speech spans detection. Our experiments demonstrate that ViSoBERT, with far fewer parameters, surpasses the previous state-of-the-art models on multiple Vietnamese social media tasks. Our ViSoBERT model is available only for research purposes. Disclaimer: This paper contains actual comments on social networks that might be construed as abusive, offensive, or obscene.", }
English and Chinese, known as resource-rich languages, have witnessed the strong development of transformer-based language models for natural language processing tasks. Although Vietnam has approximately 100M people speaking Vietnamese, several pre-trained models, e.g., PhoBERT, ViBERT, and vELECTRA, performed well on general Vietnamese NLP tasks, including POS tagging and named entity recognition. These pre-trained language models are still limited to Vietnamese social media tasks. In this paper, we present the first monolingual pre-trained language model for Vietnamese social media texts, ViSoBERT, which is pre-trained on a large-scale corpus of high-quality and diverse Vietnamese social media texts using XLM-R architecture. Moreover, we explored our pre-trained model on five important natural language downstream tasks on Vietnamese social media texts: emotion recognition, hate speech detection, sentiment analysis, spam reviews detection, and hate speech spans detection. Our experiments demonstrate that ViSoBERT, with far fewer parameters, surpasses the previous state-of-the-art models on multiple Vietnamese social media tasks. Our ViSoBERT model is available only for research purposes. Disclaimer: This paper contains actual comments on social networks that might be construed as abusive, offensive, or obscene.
[ "Nguyen, Nam", "Phan, Thang", "Nguyen, Duc-Vu", "Nguyen, Kiet" ]
ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing
emnlp-main.315
2310.11166
[ "https://huggingface.co/uitnlp/visobert" ]
https://huggingface.co/papers/2310.11166
0
2
0
4
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.316.bib
https://aclanthology.org/2023.emnlp-main.316/
@inproceedings{meng-etal-2023-rapl, title = "{RAPL}: A Relation-Aware Prototype Learning Approach for Few-Shot Document-Level Relation Extraction", author = "Meng, Shiao and Hu, Xuming and Liu, Aiwei and Li, Shuang and Ma, Fukun and Yang, Yawen and Wen, Lijie", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.316", doi = "10.18653/v1/2023.emnlp-main.316", pages = "5208--5226", abstract = "How to identify semantic relations among entities in a document when only a few labeled documents are available? Few-shot document-level relation extraction (FSDLRE) is crucial for addressing the pervasive data scarcity problem in real-world scenarios. Metric-based meta-learning is an effective framework widely adopted for FSDLRE, which constructs class prototypes for classification. However, existing works often struggle to obtain class prototypes with accurate relational semantics: 1) To build prototype for a target relation type, they aggregate the representations of all entity pairs holding that relation, while these entity pairs may also hold other relations, thus disturbing the prototype. 2) They use a set of generic NOTA (none-of-the-above) prototypes across all tasks, neglecting that the NOTA semantics differs in tasks with different target relation types. In this paper, we propose a relation-aware prototype learning method for FSDLRE to strengthen the relational semantics of prototype representations. By judiciously leveraging the relation descriptions and realistic NOTA instances as guidance, our method effectively refines the relation prototypes and generates task-specific NOTA prototypes. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches by average 2.61{\%} $F_1$ across various settings of two FSDLRE benchmarks.", }
How to identify semantic relations among entities in a document when only a few labeled documents are available? Few-shot document-level relation extraction (FSDLRE) is crucial for addressing the pervasive data scarcity problem in real-world scenarios. Metric-based meta-learning is an effective framework widely adopted for FSDLRE, which constructs class prototypes for classification. However, existing works often struggle to obtain class prototypes with accurate relational semantics: 1) To build prototype for a target relation type, they aggregate the representations of all entity pairs holding that relation, while these entity pairs may also hold other relations, thus disturbing the prototype. 2) They use a set of generic NOTA (none-of-the-above) prototypes across all tasks, neglecting that the NOTA semantics differs in tasks with different target relation types. In this paper, we propose a relation-aware prototype learning method for FSDLRE to strengthen the relational semantics of prototype representations. By judiciously leveraging the relation descriptions and realistic NOTA instances as guidance, our method effectively refines the relation prototypes and generates task-specific NOTA prototypes. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches by average 2.61{\%} $F_1$ across various settings of two FSDLRE benchmarks.
[ "Meng, Shiao", "Hu, Xuming", "Liu, Aiwei", "Li, Shuang", "Ma, Fukun", "Yang, Yawen", "Wen, Lijie" ]
RAPL: A Relation-Aware Prototype Learning Approach for Few-Shot Document-Level Relation Extraction
emnlp-main.316
null
[ "https://github.com/thu-bpm/rapl" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.317.bib
https://aclanthology.org/2023.emnlp-main.317/
@inproceedings{li-etal-2023-geolm, title = "{G}eo{LM}: Empowering Language Models for Geospatially Grounded Language Understanding", author = "Li, Zekun and Zhou, Wenxuan and Chiang, Yao-Yi and Chen, Muhao", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.317", doi = "10.18653/v1/2023.emnlp-main.317", pages = "5227--5240", abstract = "Humans subconsciously engage in geospatial reasoning when reading articles. We recognize place names and their spatial relations in text and mentally associate them with their physical locations on Earth. Although pretrained language models can mimic this cognitive process using linguistic context, they do not utilize valuable geospatial information in large, widely available geographical databases, e.g., OpenStreetMap. This paper introduces GeoLM, a geospatially grounded language model that enhances the understanding of geo-entities in natural language. GeoLM leverages geo-entity mentions as anchors to connect linguistic information in text corpora with geospatial information extracted from geographical databases. GeoLM connects the two types of context through contrastive learning and masked language modeling. It also incorporates a spatial coordinate embedding mechanism to encode distance and direction relations to capture geospatial context. In the experiment, we demonstrate that GeoLM exhibits promising capabilities in supporting toponym recognition, toponym linking, relation extraction, and geo-entity typing, which bridge the gap between natural language processing and geospatial sciences. The code is publicly available at https://github.com/knowledge-computing/geolm.", }
Humans subconsciously engage in geospatial reasoning when reading articles. We recognize place names and their spatial relations in text and mentally associate them with their physical locations on Earth. Although pretrained language models can mimic this cognitive process using linguistic context, they do not utilize valuable geospatial information in large, widely available geographical databases, e.g., OpenStreetMap. This paper introduces GeoLM, a geospatially grounded language model that enhances the understanding of geo-entities in natural language. GeoLM leverages geo-entity mentions as anchors to connect linguistic information in text corpora with geospatial information extracted from geographical databases. GeoLM connects the two types of context through contrastive learning and masked language modeling. It also incorporates a spatial coordinate embedding mechanism to encode distance and direction relations to capture geospatial context. In the experiment, we demonstrate that GeoLM exhibits promising capabilities in supporting toponym recognition, toponym linking, relation extraction, and geo-entity typing, which bridge the gap between natural language processing and geospatial sciences. The code is publicly available at https://github.com/knowledge-computing/geolm.
[ "Li, Zekun", "Zhou, Wenxuan", "Chiang, Yao-Yi", "Chen, Muhao" ]
GeoLM: Empowering Language Models for Geospatially Grounded Language Understanding
emnlp-main.317
2310.14478
[ "https://github.com/knowledge-computing/geolm" ]
https://huggingface.co/papers/2310.14478
1
1
0
4
[ "zekun-li/geolm-base-cased" ]
[]
[ "jinwei12/test", "zekun-li/geolm-linking" ]
1
Poster
https://aclanthology.org/2023.emnlp-main.318.bib
https://aclanthology.org/2023.emnlp-main.318/
@inproceedings{alukaev-etal-2023-cross, title = "Cross-Modal Conceptualization in Bottleneck Models", author = "Alukaev, Danis and Kiselev, Semen and Pershin, Ilya and Ibragimov, Bulat and Ivanov, Vladimir and Kornaev, Alexey and Titov, Ivan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.318", doi = "10.18653/v1/2023.emnlp-main.318", pages = "5241--5253", abstract = "Concept Bottleneck Models (CBMs) assume that training examples (e.g., x-ray images) are annotated with high-level concepts (e.g., types of abnormalities), and perform classification by first predicting the concepts, followed by predicting the label relying on these concepts. However, the primary challenge in employing CBMs lies in the requirement of defining concepts predictive of the label and annotating training examples with these concepts. In our approach, we adopt a more moderate assumption and instead use text descriptions (e.g., radiology reports), accompanying the images, to guide the induction of concepts. Our crossmodal approach treats concepts as discrete latent variables and promotes concepts that (1) are predictive of the label, and (2) can be predicted reliably from both the image and text. Through experiments conducted on datasets ranging from synthetic datasets (e.g., synthetic images with generated descriptions) to realistic medical imaging datasets, we demonstrate that crossmodal learning encourages the induction of interpretable concepts while also facilitating disentanglement.", }
Concept Bottleneck Models (CBMs) assume that training examples (e.g., x-ray images) are annotated with high-level concepts (e.g., types of abnormalities), and perform classification by first predicting the concepts, followed by predicting the label relying on these concepts. However, the primary challenge in employing CBMs lies in the requirement of defining concepts predictive of the label and annotating training examples with these concepts. In our approach, we adopt a more moderate assumption and instead use text descriptions (e.g., radiology reports), accompanying the images, to guide the induction of concepts. Our crossmodal approach treats concepts as discrete latent variables and promotes concepts that (1) are predictive of the label, and (2) can be predicted reliably from both the image and text. Through experiments conducted on datasets ranging from synthetic datasets (e.g., synthetic images with generated descriptions) to realistic medical imaging datasets, we demonstrate that crossmodal learning encourages the induction of interpretable concepts while also facilitating disentanglement.
[ "Alukaev, Danis", "Kiselev, Semen", "Pershin, Ilya", "Ibragimov, Bulat", "Ivanov, Vladimir", "Kornaev, Alexey", "Titov, Ivan" ]
Cross-Modal Conceptualization in Bottleneck Models
emnlp-main.318
null
[ "https://github.com/danisalukaev/shapes" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.319.bib
https://aclanthology.org/2023.emnlp-main.319/
@inproceedings{hu-etal-2023-llm, title = "{LLM}-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models", author = "Hu, Zhiqiang and Wang, Lei and Lan, Yihuai and Xu, Wanyu and Lim, Ee-Peng and Bing, Lidong and Xu, Xing and Poria, Soujanya and Lee, Roy", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.319", doi = "10.18653/v1/2023.emnlp-main.319", pages = "5254--5276", abstract = "The success of large language models (LLMs), like GPT-4 and ChatGPT, has led to the development of numerous cost-effective and accessible alternatives that are created by finetuning open-access LLMs with task-specific data (e.g., ChatDoctor) or instruction data (e.g., Alpaca). Among the various fine-tuning methods, adapter-based parameter-efficient fine-tuning (PEFT) is undoubtedly one of the most attractive topics, as it only requires fine-tuning a few external parameters instead of the entire LLMs while achieving comparable or even better performance. To enable further research on PEFT methods of LLMs, this paper presents LLM-Adapters, an easy-to-use framework that integrates various adapters into LLMs and can execute these adapter-based PEFT methods of LLMs for different tasks. The framework includes state-of-the-art open-access LLMs such as LLaMA, BLOOM, and GPT-J, as well as widely used adapters such as Series adapters, Parallel adapter, Prompt-based learning and Reparametrization-based methods. Moreover, we conduct extensive empirical studies on the impact of adapter types, placement locations, and hyper-parameters to the best design for each adapter-based methods. We evaluate the effectiveness of the adapters on fourteen datasets from two different reasoning tasks, Arithmetic Reasoning and Commonsense Reasoning. The results demonstrate that using adapter-based PEFT in smaller-scale LLMs (7B) with few extra trainable parameters yields comparable, and in some cases superior, performance to powerful LLMs (175B) in zero-shot inference on simple math reasoning datasets.", }
The success of large language models (LLMs), like GPT-4 and ChatGPT, has led to the development of numerous cost-effective and accessible alternatives that are created by finetuning open-access LLMs with task-specific data (e.g., ChatDoctor) or instruction data (e.g., Alpaca). Among the various fine-tuning methods, adapter-based parameter-efficient fine-tuning (PEFT) is undoubtedly one of the most attractive topics, as it only requires fine-tuning a few external parameters instead of the entire LLMs while achieving comparable or even better performance. To enable further research on PEFT methods of LLMs, this paper presents LLM-Adapters, an easy-to-use framework that integrates various adapters into LLMs and can execute these adapter-based PEFT methods of LLMs for different tasks. The framework includes state-of-the-art open-access LLMs such as LLaMA, BLOOM, and GPT-J, as well as widely used adapters such as Series adapters, Parallel adapter, Prompt-based learning and Reparametrization-based methods. Moreover, we conduct extensive empirical studies on the impact of adapter types, placement locations, and hyper-parameters to the best design for each adapter-based methods. We evaluate the effectiveness of the adapters on fourteen datasets from two different reasoning tasks, Arithmetic Reasoning and Commonsense Reasoning. The results demonstrate that using adapter-based PEFT in smaller-scale LLMs (7B) with few extra trainable parameters yields comparable, and in some cases superior, performance to powerful LLMs (175B) in zero-shot inference on simple math reasoning datasets.
[ "Hu, Zhiqiang", "Wang, Lei", "Lan, Yihuai", "Xu, Wanyu", "Lim, Ee-Peng", "Bing, Lidong", "Xu, Xing", "Poria, Soujanya", "Lee, Roy" ]
LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
emnlp-main.319
null
[ "https://github.com/agi-edgerunners/llm-adapters" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.320.bib
https://aclanthology.org/2023.emnlp-main.320/
@inproceedings{ruosch-etal-2023-dream, title = "{DREAM}: Deployment of Recombination and Ensembles in Argument Mining", author = "Ruosch, Florian and Sarasua, Cristina and Bernstein, Abraham", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.320", doi = "10.18653/v1/2023.emnlp-main.320", pages = "5277--5290", abstract = "Current approaches to Argument Mining (AM) tend to take a holistic or black-box view of the overall pipeline. This paper, in contrast, aims to provide a solution to achieve increased performance based on current components instead of independent all-new solutions. To that end, it presents the Deployment of Recombination and Ensemble methods for Argument Miners (DREAM) framework that allows for the (automated) combination of AM components. Using ensemble methods, DREAM combines sets of AM systems to improve accuracy for the four tasks in the AM pipeline. Furthermore, it leverages recombination by using different argument miners elements throughout the pipeline. Experiments with five systems previously included in a benchmark show that the systems combined with DREAM can outperform the previous best single systems in terms of accuracy measured by an AM benchmark.", }
Current approaches to Argument Mining (AM) tend to take a holistic or black-box view of the overall pipeline. This paper, in contrast, aims to provide a solution to achieve increased performance based on current components instead of independent all-new solutions. To that end, it presents the Deployment of Recombination and Ensemble methods for Argument Miners (DREAM) framework that allows for the (automated) combination of AM components. Using ensemble methods, DREAM combines sets of AM systems to improve accuracy for the four tasks in the AM pipeline. Furthermore, it leverages recombination by using different argument miners elements throughout the pipeline. Experiments with five systems previously included in a benchmark show that the systems combined with DREAM can outperform the previous best single systems in terms of accuracy measured by an AM benchmark.
[ "Ruosch, Florian", "Sarasua, Cristina", "Bernstein, Abraham" ]
DREAM: Deployment of Recombination and Ensembles in Argument Mining
emnlp-main.320
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.321.bib
https://aclanthology.org/2023.emnlp-main.321/
@inproceedings{datta-etal-2023-mildsum, title = "{MILDS}um: A Novel Benchmark Dataset for Multilingual Summarization of {I}ndian Legal Case Judgments", author = "Datta, Debtanu and Soni, Shubham and Mukherjee, Rajdeep and Ghosh, Saptarshi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.321", doi = "10.18653/v1/2023.emnlp-main.321", pages = "5291--5302", abstract = "Automatic summarization of legal case judgments is a practically important problem that has attracted substantial research efforts in many countries. In the context of the Indian judiciary, there is an additional complexity {--} Indian legal case judgments are mostly written in complex English, but a significant portion of India{'}s population lacks command of the English language. Hence, it is crucial to summarize the legal documents in Indian languages to ensure equitable access to justice. While prior research primarily focuses on summarizing legal case judgments in their source languages, this study presents a pioneering effort toward cross-lingual summarization of English legal documents into Hindi, the most frequently spoken Indian language. We construct the first high-quality legal corpus comprising of 3,122 case judgments from prominent Indian courts in English, along with their summaries in both English and Hindi, drafted by legal practitioners. We benchmark the performance of several diverse summarization approaches on our corpus and demonstrate the need for further research in cross-lingual summarization in the legal domain.", }
Automatic summarization of legal case judgments is a practically important problem that has attracted substantial research efforts in many countries. In the context of the Indian judiciary, there is an additional complexity {--} Indian legal case judgments are mostly written in complex English, but a significant portion of India{'}s population lacks command of the English language. Hence, it is crucial to summarize the legal documents in Indian languages to ensure equitable access to justice. While prior research primarily focuses on summarizing legal case judgments in their source languages, this study presents a pioneering effort toward cross-lingual summarization of English legal documents into Hindi, the most frequently spoken Indian language. We construct the first high-quality legal corpus comprising of 3,122 case judgments from prominent Indian courts in English, along with their summaries in both English and Hindi, drafted by legal practitioners. We benchmark the performance of several diverse summarization approaches on our corpus and demonstrate the need for further research in cross-lingual summarization in the legal domain.
[ "Datta, Debtanu", "Soni, Shubham", "Mukherjee, Rajdeep", "Ghosh, Saptarshi" ]
MILDSum: A Novel Benchmark Dataset for Multilingual Summarization of Indian Legal Case Judgments
emnlp-main.321
2310.18600
[ "https://github.com/law-ai/mildsum" ]
https://huggingface.co/papers/2310.18600
1
0
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.322.bib
https://aclanthology.org/2023.emnlp-main.322/
@inproceedings{ma-etal-2023-query, title = "Query Rewriting in Retrieval-Augmented Large Language Models", author = "Ma, Xinbei and Gong, Yeyun and He, Pengcheng and Zhao, Hai and Duan, Nan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.322", doi = "10.18653/v1/2023.emnlp-main.322", pages = "5303--5315", abstract = "Large Language Models (LLMs) play powerful, black-box readers in the retrieve-then-read pipeline, making remarkable progress in knowledge-intensive tasks. This work introduces a new framework, Rewrite-Retrieve-Read instead of the previous retrieve-then-read for the retrieval-augmented LLMs from the perspective of the query rewriting. Unlike prior studies focusing on adapting either the retriever or the reader, our approach pays attention to the adaptation of the search query itself, for there is inevitably a gap between the input text and the needed knowledge in retrieval. We first prompt an LLM to generate the query, then use a web search engine to retrieve contexts. Furthermore, to better align the query to the frozen modules, we propose a trainable scheme for our pipeline. A small language model is adopted as a trainable rewriter to cater to the black-box LLM reader. The rewriter is trained using the feedback of the LLM reader by reinforcement learning. Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice QA. Experiments results show consistent performance improvement, indicating that our framework is proven effective and scalable, and brings a new framework for retrieval-augmented LLM.", }
Large Language Models (LLMs) play powerful, black-box readers in the retrieve-then-read pipeline, making remarkable progress in knowledge-intensive tasks. This work introduces a new framework, Rewrite-Retrieve-Read instead of the previous retrieve-then-read for the retrieval-augmented LLMs from the perspective of the query rewriting. Unlike prior studies focusing on adapting either the retriever or the reader, our approach pays attention to the adaptation of the search query itself, for there is inevitably a gap between the input text and the needed knowledge in retrieval. We first prompt an LLM to generate the query, then use a web search engine to retrieve contexts. Furthermore, to better align the query to the frozen modules, we propose a trainable scheme for our pipeline. A small language model is adopted as a trainable rewriter to cater to the black-box LLM reader. The rewriter is trained using the feedback of the LLM reader by reinforcement learning. Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice QA. Experiments results show consistent performance improvement, indicating that our framework is proven effective and scalable, and brings a new framework for retrieval-augmented LLM.
[ "Ma, Xinbei", "Gong, Yeyun", "He, Pengcheng", "Zhao, Hai", "Duan, Nan" ]
Query Rewriting in Retrieval-Augmented Large Language Models
emnlp-main.322
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.323.bib
https://aclanthology.org/2023.emnlp-main.323/
@inproceedings{sahu-etal-2023-promptmix, title = "{P}rompt{M}ix: A Class Boundary Augmentation Method for Large Language Model Distillation", author = "Sahu, Gaurav and Vechtomova, Olga and Bahdanau, Dzmitry and Laradji, Issam", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.323", doi = "10.18653/v1/2023.emnlp-main.323", pages = "5316--5327", abstract = "Data augmentation is a widely used technique to address the problem of text classification when there is a limited amount of training data. Recent work often tackles this problem using large language models (LLMs) like GPT3 that can generate new examples given already available ones. In this work, we propose a method to generate more helpful augmented data by utilizing the LLM{'}s abilities to follow instructions and perform few-shot classifications. Our specific PromptMix method consists of two steps: 1) generate challenging text augmentations near class boundaries; however, generating borderline examples increases the risk of false positives in the dataset, so we 2) relabel the text augmentations using a prompting-based LLM classifier to enhance the correctness of labels in the generated data. We evaluate the proposed method in challenging 2-shot and zero-shot settings on four text classification datasets: Banking77, TREC6, Subjectivity (SUBJ), and Twitter Complaints. Our experiments show that generating and, crucially, relabeling borderline examples facilitates the transfer of knowledge of a massive LLM like GPT3.5-turbo into smaller and cheaper classifiers like DistilBERT-base and BERT-base. Furthermore, 2-shot PromptMix outperforms multiple 5-shot data augmentation methods on the four datasets. Our code is available at https://github.com/ServiceNow/PromptMix-EMNLP-2023.", }
Data augmentation is a widely used technique to address the problem of text classification when there is a limited amount of training data. Recent work often tackles this problem using large language models (LLMs) like GPT3 that can generate new examples given already available ones. In this work, we propose a method to generate more helpful augmented data by utilizing the LLM{'}s abilities to follow instructions and perform few-shot classifications. Our specific PromptMix method consists of two steps: 1) generate challenging text augmentations near class boundaries; however, generating borderline examples increases the risk of false positives in the dataset, so we 2) relabel the text augmentations using a prompting-based LLM classifier to enhance the correctness of labels in the generated data. We evaluate the proposed method in challenging 2-shot and zero-shot settings on four text classification datasets: Banking77, TREC6, Subjectivity (SUBJ), and Twitter Complaints. Our experiments show that generating and, crucially, relabeling borderline examples facilitates the transfer of knowledge of a massive LLM like GPT3.5-turbo into smaller and cheaper classifiers like DistilBERT-base and BERT-base. Furthermore, 2-shot PromptMix outperforms multiple 5-shot data augmentation methods on the four datasets. Our code is available at https://github.com/ServiceNow/PromptMix-EMNLP-2023.
[ "Sahu, Gaurav", "Vechtomova, Olga", "Bahdanau, Dzmitry", "Laradji, Issam" ]
PromptMix: A Class Boundary Augmentation Method for Large Language Model Distillation
emnlp-main.323
null
[ "https://github.com/servicenow/promptmix-emnlp-2023" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.324.bib
https://aclanthology.org/2023.emnlp-main.324/
@inproceedings{maimon-tsarfaty-2023-cohesentia, title = "{COHESENTIA}: A Novel Benchmark of Incremental versus Holistic Assessment of Coherence in Generated Texts", author = "Maimon, Aviya and Tsarfaty, Reut", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.324", doi = "10.18653/v1/2023.emnlp-main.324", pages = "5328--5343", abstract = "Coherence is a linguistic term that refers to the relations between small textual units (sentences, propositions), which make the text logically consistent and meaningful to the reader. With the advances of generative foundational models in NLP, there is a pressing need to automatically assess the human-perceived coherence of automatically generated texts. Up until now, little work has been done on explicitly assessing the coherence of generated texts and analyzing the factors contributing to (in)coherence. Previous work on the topic used other tasks, e.g., sentence reordering, as proxies of coherence, rather than approaching coherence detection heads on. In this paper, we introduce CoheSentia, a novel benchmark of human-perceived coherence of automatically generated texts. Our annotation protocol reflects two perspectives; one is global, assigning a single coherence score, and the other is incremental, scoring sentence by sentence. The incremental method produces an (in)coherence score for each text fragment and also pinpoints reasons for incoherence at that point. Our benchmark contains 500 automatically-generated and human-annotated paragraphs, each annotated in both methods, by multiple raters. Our analysis shows that the inter-annotator agreement in the incremental mode is higher than in the holistic alternative, and our experiments show that standard LMs fine-tuned for coherence detection show varied performance on the different factors contributing to (in)coherence. All in all, these models yield unsatisfactory performance, emphasizing the need for developing more reliable methods for coherence assessment.", }
Coherence is a linguistic term that refers to the relations between small textual units (sentences, propositions), which make the text logically consistent and meaningful to the reader. With the advances of generative foundational models in NLP, there is a pressing need to automatically assess the human-perceived coherence of automatically generated texts. Up until now, little work has been done on explicitly assessing the coherence of generated texts and analyzing the factors contributing to (in)coherence. Previous work on the topic used other tasks, e.g., sentence reordering, as proxies of coherence, rather than approaching coherence detection heads on. In this paper, we introduce CoheSentia, a novel benchmark of human-perceived coherence of automatically generated texts. Our annotation protocol reflects two perspectives; one is global, assigning a single coherence score, and the other is incremental, scoring sentence by sentence. The incremental method produces an (in)coherence score for each text fragment and also pinpoints reasons for incoherence at that point. Our benchmark contains 500 automatically-generated and human-annotated paragraphs, each annotated in both methods, by multiple raters. Our analysis shows that the inter-annotator agreement in the incremental mode is higher than in the holistic alternative, and our experiments show that standard LMs fine-tuned for coherence detection show varied performance on the different factors contributing to (in)coherence. All in all, these models yield unsatisfactory performance, emphasizing the need for developing more reliable methods for coherence assessment.
[ "Maimon, Aviya", "Tsarfaty, Reut" ]
COHESENTIA: A Novel Benchmark of Incremental versus Holistic Assessment of Coherence in Generated Texts
emnlp-main.324
2310.16329
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.325.bib
https://aclanthology.org/2023.emnlp-main.325/
@inproceedings{wu-etal-2023-qudeval, title = "{QUD}eval: The Evaluation of Questions Under Discussion Discourse Parsing", author = "Wu, Yating and Mangla, Ritika and Durrett, Greg and Li, Junyi Jessy", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.325", doi = "10.18653/v1/2023.emnlp-main.325", pages = "5344--5363", abstract = "Questions Under Discussion (QUD) is a versatile linguistic framework in which discourse progresses as continuously asking questions and answering them. Automatic parsing of a discourse to produce a QUD structure thus entails a complex question generation task: given a document and an answer sentence, generate a question that satisfies linguistic constraints of QUD and can be grounded in an anchor sentence in prior context. These questions are known to be curiosity-driven and open-ended. This work introduces the first framework for the automatic evaluation of QUD parsing, instantiating the theoretical constraints of QUD in a concrete protocol. We present QUDeval, a dataset of fine-grained evaluation of 2,190 QUD questions generated from both fine-tuned systems and LLMs. Using QUDeval, we show that satisfying all constraints of QUD is still challenging for modern LLMs, and that existing evaluation metrics poorly approximate parser quality. Encouragingly, human-authored QUDs are scored highly by our human evaluators, suggesting that there is headroom for further progress on language modeling to improve both QUD parsing and QUD evaluation.", }
Questions Under Discussion (QUD) is a versatile linguistic framework in which discourse progresses as continuously asking questions and answering them. Automatic parsing of a discourse to produce a QUD structure thus entails a complex question generation task: given a document and an answer sentence, generate a question that satisfies linguistic constraints of QUD and can be grounded in an anchor sentence in prior context. These questions are known to be curiosity-driven and open-ended. This work introduces the first framework for the automatic evaluation of QUD parsing, instantiating the theoretical constraints of QUD in a concrete protocol. We present QUDeval, a dataset of fine-grained evaluation of 2,190 QUD questions generated from both fine-tuned systems and LLMs. Using QUDeval, we show that satisfying all constraints of QUD is still challenging for modern LLMs, and that existing evaluation metrics poorly approximate parser quality. Encouragingly, human-authored QUDs are scored highly by our human evaluators, suggesting that there is headroom for further progress on language modeling to improve both QUD parsing and QUD evaluation.
[ "Wu, Yating", "Mangla, Ritika", "Durrett, Greg", "Li, Junyi Jessy" ]
QUDeval: The Evaluation of Questions Under Discussion Discourse Parsing
emnlp-main.325
null
[ "https://github.com/lingchensanwen/qudeval" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.326.bib
https://aclanthology.org/2023.emnlp-main.326/
@inproceedings{yang-etal-2023-prca, title = "{PRCA}: Fitting Black-Box Large Language Models for Retrieval Question Answering via Pluggable Reward-Driven Contextual Adapter", author = "Yang, Haoyan and Li, Zhitao and Zhang, Yong and Wang, Jianzong and Cheng, Ning and Li, Ming and Xiao, Jing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.326", doi = "10.18653/v1/2023.emnlp-main.326", pages = "5364--5375", abstract = "The Retrieval Question Answering (ReQA) task employs the retrieval-augmented framework, composed of a retriever and generator. The generators formulate the answer based on the documents retrieved by the retriever. Incorporating Large Language Models (LLMs) as generators is beneficial due to their advanced QA capabilities, but they are typically too large to be fine-tuned with budget constraints while some of them are only accessible via APIs. To tackle this issue and further improve ReQA performance, we propose a trainable Pluggable Reward-Driven Contextual Adapter (PRCA), keeping the generator as a black box. Positioned between the retriever and generator in a Pluggable manner, PRCA refines the retrieved information by operating in a token-autoregressive strategy via maximizing rewards of the reinforcement learning phase. Our experiments validate PRCA{'}s effectiveness in enhancing ReQA performance on three datasets by up to 20{\%} improvement to fit black-box LLMs into existing frameworks, demonstrating its considerable potential in the LLMs era.", }
The Retrieval Question Answering (ReQA) task employs the retrieval-augmented framework, composed of a retriever and generator. The generators formulate the answer based on the documents retrieved by the retriever. Incorporating Large Language Models (LLMs) as generators is beneficial due to their advanced QA capabilities, but they are typically too large to be fine-tuned with budget constraints while some of them are only accessible via APIs. To tackle this issue and further improve ReQA performance, we propose a trainable Pluggable Reward-Driven Contextual Adapter (PRCA), keeping the generator as a black box. Positioned between the retriever and generator in a Pluggable manner, PRCA refines the retrieved information by operating in a token-autoregressive strategy via maximizing rewards of the reinforcement learning phase. Our experiments validate PRCA{'}s effectiveness in enhancing ReQA performance on three datasets by up to 20{\%} improvement to fit black-box LLMs into existing frameworks, demonstrating its considerable potential in the LLMs era.
[ "Yang, Haoyan", "Li, Zhitao", "Zhang, Yong", "Wang, Jianzong", "Cheng, Ning", "Li, Ming", "Xiao, Jing" ]
PRCA: Fitting Black-Box Large Language Models for Retrieval Question Answering via Pluggable Reward-Driven Contextual Adapter
emnlp-main.326
2310.18347
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.327.bib
https://aclanthology.org/2023.emnlp-main.327/
@inproceedings{tai-etal-2023-exploring, title = "Exploring Chain of Thought Style Prompting for Text-to-{SQL}", author = "Tai, Chang-Yu and Chen, Ziru and Zhang, Tianshu and Deng, Xiang and Sun, Huan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.327", doi = "10.18653/v1/2023.emnlp-main.327", pages = "5376--5393", abstract = "In-context learning with large language models (LLMs) has recently caught increasing attention due to its superior few-shot performance on various tasks. However, its performance on text-to-SQL parsing still has much room for improvement. In this paper, we hypothesize that a crucial aspect of LLMs to improve for text-to-SQL parsing is their multi-step reasoning ability. Thus, we systematically study how to enhance LLMs{'} reasoning ability through chain of thought (CoT) style prompting, including the original chain-of-thought prompting and least-to-most prompting. Our experiments demonstrate that iterative prompting as in least-to-most prompting may be unnecessary for text-to-SQL parsing, and using detailed reasoning steps tends to have more error propagation issues. Based on these findings, we propose a new CoT-style prompting method for text-to-SQL parsing. It brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively, compared to the standard prompting method without reasoning steps; 2.4 and 1.5 point absolute gains, compared to the least-to-most prompting method.", }
In-context learning with large language models (LLMs) has recently caught increasing attention due to its superior few-shot performance on various tasks. However, its performance on text-to-SQL parsing still has much room for improvement. In this paper, we hypothesize that a crucial aspect of LLMs to improve for text-to-SQL parsing is their multi-step reasoning ability. Thus, we systematically study how to enhance LLMs{'} reasoning ability through chain of thought (CoT) style prompting, including the original chain-of-thought prompting and least-to-most prompting. Our experiments demonstrate that iterative prompting as in least-to-most prompting may be unnecessary for text-to-SQL parsing, and using detailed reasoning steps tends to have more error propagation issues. Based on these findings, we propose a new CoT-style prompting method for text-to-SQL parsing. It brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively, compared to the standard prompting method without reasoning steps; 2.4 and 1.5 point absolute gains, compared to the least-to-most prompting method.
[ "Tai, Chang-Yu", "Chen, Ziru", "Zhang, Tianshu", "Deng, Xiang", "Sun, Huan" ]
Exploring Chain of Thought Style Prompting for Text-to-SQL
emnlp-main.327
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.328.bib
https://aclanthology.org/2023.emnlp-main.328/
@inproceedings{butoi-etal-2023-efficient, title = "Efficient Algorithms for Recognizing Weighted Tree-Adjoining Languages", author = "Butoi, Alexandra and Vieira, Tim and Cotterell, Ryan and Chiang, David", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.328", doi = "10.18653/v1/2023.emnlp-main.328", pages = "5394--5416", abstract = "The class of tree-adjoining languages can be characterized by various two-level formalisms, consisting of a context-free grammar (CFG) or pushdown automaton (PDA) controlling another CFG or PDA. These four formalisms are equivalent to tree-adjoining grammars (TAG), linear indexed grammars (LIG), pushdown-adjoining automata (PAA), and embedded pushdown automata (EPDA). We define semiring-weighted versions of the above two-level formalisms, and we design new algorithms for computing their stringsums (the weight of all derivations of a string) and allsums (the weight of all derivations). From these, we also immediately obtain stringsum and allsum algorithms for TAG, LIG, PAA, and EPDA. For LIG, our algorithm is more time-efficient by a factor of $\mathcal{O}(n|\mathcal{N}|)$ (where $n$ is the string length and $|\mathcal{N}|$ is the size of the nonterminal set) and more space-efficient by a factor of $\mathcal{O}(|\Gamma|)$ (where $\Gamma$ is the size of the stack alphabet) than the algorithm of Vijay-Shanker and Weir (1989). For EPDA, our algorithm is both more space-efficient and time-efficient than the algorithm of Alonso et al. (2001) by factors of $\mathcal{O}(|\Gamma|^2)$ and $\mathcal{O}(|\Gamma|^3)$, respectively. Finally, we give the first PAA stringsum and allsum algorithms.", }
The class of tree-adjoining languages can be characterized by various two-level formalisms, consisting of a context-free grammar (CFG) or pushdown automaton (PDA) controlling another CFG or PDA. These four formalisms are equivalent to tree-adjoining grammars (TAG), linear indexed grammars (LIG), pushdown-adjoining automata (PAA), and embedded pushdown automata (EPDA). We define semiring-weighted versions of the above two-level formalisms, and we design new algorithms for computing their stringsums (the weight of all derivations of a string) and allsums (the weight of all derivations). From these, we also immediately obtain stringsum and allsum algorithms for TAG, LIG, PAA, and EPDA. For LIG, our algorithm is more time-efficient by a factor of $\mathcal{O}(n|\mathcal{N}|)$ (where $n$ is the string length and $|\mathcal{N}|$ is the size of the nonterminal set) and more space-efficient by a factor of $\mathcal{O}(|\Gamma|)$ (where $\Gamma$ is the size of the stack alphabet) than the algorithm of Vijay-Shanker and Weir (1989). For EPDA, our algorithm is both more space-efficient and time-efficient than the algorithm of Alonso et al. (2001) by factors of $\mathcal{O}(|\Gamma|^2)$ and $\mathcal{O}(|\Gamma|^3)$, respectively. Finally, we give the first PAA stringsum and allsum algorithms.
[ "Butoi, Alex", "ra", "Vieira, Tim", "Cotterell, Ryan", "Chiang, David" ]
Efficient Algorithms for Recognizing Weighted Tree-Adjoining Languages
emnlp-main.328
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.329.bib
https://aclanthology.org/2023.emnlp-main.329/
@inproceedings{tian-etal-2023-harnessing, title = "Harnessing Black-Box Control to Boost Commonsense in {LM}{'}s Generation", author = "Tian, Yufei and Zhang, Felix and Peng, Nanyun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.329", doi = "10.18653/v1/2023.emnlp-main.329", pages = "5417--5432", abstract = "Large language models (LLMs) such as GPT-3 have demonstrated a strong capability to generate coherent and contextually relevant text. However, amidst their successes, a crucial issue persists: their generated outputs still lack commonsense at times. Moreover, fine-tuning the entire LLM towards more commonsensical outputs is computationally expensive if not infeasible. In this paper, we present a computation-efficient framework that steers a frozen Pre-Trained Language Model (PTLM) towards more commonsensical generation (i.e., producing a plausible output that incorporates a list of concepts in a meaningful way). Specifically, we first construct a reference-free evaluator that assigns a sentence with a commonsensical score by grounding the sentence to a dynamic commonsense knowledge base from four different relational aspects. We then use the scorer as the oracle for commonsense knowledge, and extend the controllable generation method called NADO to train an auxiliary head that guides a fixed PTLM to better satisfy the oracle. We test our framework on a series of GPT-2-, Flan-T5-, and Alpaca-based language models (LMs) on two constrained concept-to-sentence benchmarks. Human evaluation results demonstrate that our method consistently leads to the most commonsensical outputs.", }
Large language models (LLMs) such as GPT-3 have demonstrated a strong capability to generate coherent and contextually relevant text. However, amidst their successes, a crucial issue persists: their generated outputs still lack commonsense at times. Moreover, fine-tuning the entire LLM towards more commonsensical outputs is computationally expensive if not infeasible. In this paper, we present a computation-efficient framework that steers a frozen Pre-Trained Language Model (PTLM) towards more commonsensical generation (i.e., producing a plausible output that incorporates a list of concepts in a meaningful way). Specifically, we first construct a reference-free evaluator that assigns a sentence with a commonsensical score by grounding the sentence to a dynamic commonsense knowledge base from four different relational aspects. We then use the scorer as the oracle for commonsense knowledge, and extend the controllable generation method called NADO to train an auxiliary head that guides a fixed PTLM to better satisfy the oracle. We test our framework on a series of GPT-2-, Flan-T5-, and Alpaca-based language models (LMs) on two constrained concept-to-sentence benchmarks. Human evaluation results demonstrate that our method consistently leads to the most commonsensical outputs.
[ "Tian, Yufei", "Zhang, Felix", "Peng, Nanyun" ]
Harnessing Black-Box Control to Boost Commonsense in LM's Generation
emnlp-main.329
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.330.bib
https://aclanthology.org/2023.emnlp-main.330/
@inproceedings{tian-etal-2023-just, title = "Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback", author = "Tian, Katherine and Mitchell, Eric and Zhou, Allan and Sharma, Archit and Rafailov, Rafael and Yao, Huaxiu and Finn, Chelsea and Manning, Christopher", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.330", doi = "10.18653/v1/2023.emnlp-main.330", pages = "5433--5442", abstract = "A trustworthy real-world prediction system should produce well-calibrated confidence scores; that is, its confidence in an answer should be indicative of the likelihood that the answer is correct, enabling deferral to an expert in cases of low-confidence predictions. Recent studies have shown that unsupervised pre-training produces large language models (LMs) whose conditional probabilities are remarkably well-calibrated. However, the most widely-used LMs are fine-tuned with reinforcement learning from human feedback (RLHF-LMs), and some studies have suggested that RLHF-LMs produce conditional probabilities that are very poorly calibrated. In light of this perceived weakness, we conduct a broad evaluation of methods for extracting confidence scores from RLHF-LMs. For RLHF-LMs such as ChatGPT, GPT-4, and Claude, we find that verbalized confidences emitted as output tokens are typically better-calibrated than the model{'}s conditional probabilities on the TriviaQA, SciQ, and TruthfulQA benchmarks, often reducing the expected calibration error by a relative 50{\%}.", }
A trustworthy real-world prediction system should produce well-calibrated confidence scores; that is, its confidence in an answer should be indicative of the likelihood that the answer is correct, enabling deferral to an expert in cases of low-confidence predictions. Recent studies have shown that unsupervised pre-training produces large language models (LMs) whose conditional probabilities are remarkably well-calibrated. However, the most widely-used LMs are fine-tuned with reinforcement learning from human feedback (RLHF-LMs), and some studies have suggested that RLHF-LMs produce conditional probabilities that are very poorly calibrated. In light of this perceived weakness, we conduct a broad evaluation of methods for extracting confidence scores from RLHF-LMs. For RLHF-LMs such as ChatGPT, GPT-4, and Claude, we find that verbalized confidences emitted as output tokens are typically better-calibrated than the model{'}s conditional probabilities on the TriviaQA, SciQ, and TruthfulQA benchmarks, often reducing the expected calibration error by a relative 50{\%}.
[ "Tian, Katherine", "Mitchell, Eric", "Zhou, Allan", "Sharma, Archit", "Rafailov, Rafael", "Yao, Huaxiu", "Finn, Chelsea", "Manning, Christopher" ]
Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback
emnlp-main.330
2305.14975
[ "" ]
https://huggingface.co/papers/2305.14975
0
1
0
8
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.331.bib
https://aclanthology.org/2023.emnlp-main.331/
@inproceedings{yang-etal-2023-representative, title = "Representative Demonstration Selection for In-Context Learning with Two-Stage Determinantal Point Process", author = "Yang, Zhao and Zhang, Yuanzhe and Sui, Dianbo and Liu, Cao and Zhao, Jun and Liu, Kang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.331", doi = "10.18653/v1/2023.emnlp-main.331", pages = "5443--5456", abstract = "Although In-Context Learning has proven effective across a broad array of tasks, its efficiency is noticeably influenced by the selection of demonstrations. Existing methods tend to select different demonstrations for each test instance, which is time-consuming and poses limitations in practical scenarios. Therefore, this study aims to address the challenge of selecting a representative subset of in-context demonstrations that can effectively prompt different test instances in a specific task. We propose that this representative subset should be of high quality and diversity. Our empirical analyses confirm that demonstrations that meet these criteria can indeed bolster model performance. To satisfy these criteria, this paper further introduces a two-stage Determinantal Point Process (DPP) method designed to incorporate both quality and diversity in the process of demonstration selection, thereby obtaining representative in-context demonstrations. Through comprehensive experimentation, we have confirmed the efficacy of our proposed method, paving the way for more practical and effective In-Context Learning.", }
Although In-Context Learning has proven effective across a broad array of tasks, its efficiency is noticeably influenced by the selection of demonstrations. Existing methods tend to select different demonstrations for each test instance, which is time-consuming and poses limitations in practical scenarios. Therefore, this study aims to address the challenge of selecting a representative subset of in-context demonstrations that can effectively prompt different test instances in a specific task. We propose that this representative subset should be of high quality and diversity. Our empirical analyses confirm that demonstrations that meet these criteria can indeed bolster model performance. To satisfy these criteria, this paper further introduces a two-stage Determinantal Point Process (DPP) method designed to incorporate both quality and diversity in the process of demonstration selection, thereby obtaining representative in-context demonstrations. Through comprehensive experimentation, we have confirmed the efficacy of our proposed method, paving the way for more practical and effective In-Context Learning.
[ "Yang, Zhao", "Zhang, Yuanzhe", "Sui, Dianbo", "Liu, Cao", "Zhao, Jun", "Liu, Kang" ]
Representative Demonstration Selection for In-Context Learning with Two-Stage Determinantal Point Process
emnlp-main.331
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.332.bib
https://aclanthology.org/2023.emnlp-main.332/
@inproceedings{hagstrom-etal-2023-effect, title = "The Effect of Scaling, Retrieval Augmentation and Form on the Factual Consistency of Language Models", author = {Hagstr{\"o}m, Lovisa and Saynova, Denitsa and Norlund, Tobias and Johansson, Moa and Johansson, Richard}, editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.332", doi = "10.18653/v1/2023.emnlp-main.332", pages = "5457--5476", abstract = "Large Language Models (LLMs) make natural interfaces to factual knowledge, but their usefulness is limited by their tendency to deliver inconsistent answers to semantically equivalent questions. For example, a model might supply the answer {``}Edinburgh{''} to {``}Anne Redpath passed away in X.{''} and {``}London{''} to {``}Anne Redpath{'}s life ended in X.{''} In this work, we identify potential causes of inconsistency and evaluate the effectiveness of two mitigation strategies: up-scaling and augmenting the LM with a passage retrieval database. Our results on the LLaMA and Atlas models show that both strategies reduce inconsistency but that retrieval augmentation is considerably more efficient. We further consider and disentangle the consistency contributions of different components of Atlas. For all LMs evaluated we find that syntactical form and task artifacts impact consistency. Taken together, our results provide a better understanding of the factors affecting the factual consistency of language models.", }
Large Language Models (LLMs) make natural interfaces to factual knowledge, but their usefulness is limited by their tendency to deliver inconsistent answers to semantically equivalent questions. For example, a model might supply the answer {``}Edinburgh{''} to {``}Anne Redpath passed away in X.{''} and {``}London{''} to {``}Anne Redpath{'}s life ended in X.{''} In this work, we identify potential causes of inconsistency and evaluate the effectiveness of two mitigation strategies: up-scaling and augmenting the LM with a passage retrieval database. Our results on the LLaMA and Atlas models show that both strategies reduce inconsistency but that retrieval augmentation is considerably more efficient. We further consider and disentangle the consistency contributions of different components of Atlas. For all LMs evaluated we find that syntactical form and task artifacts impact consistency. Taken together, our results provide a better understanding of the factors affecting the factual consistency of language models.
[ "Hagstr{\\\"o}m, Lovisa", "Saynova, Denitsa", "Norlund, Tobias", "Johansson, Moa", "Johansson, Richard" ]
The Effect of Scaling, Retrieval Augmentation and Form on the Factual Consistency of Language Models
emnlp-main.332
2311.01307
[ "https://github.com/dsaynova/pararel" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.333.bib
https://aclanthology.org/2023.emnlp-main.333/
@inproceedings{shahmohammadi-etal-2023-vipe, title = "{V}i{PE}: Visualise Pretty-much Everything", author = "Shahmohammadi, Hassan and Ghosh, Adhiraj and Lensch, Hendrik", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.333", doi = "10.18653/v1/2023.emnlp-main.333", pages = "5477--5494", abstract = "Figurative and non-literal expressions are profoundly integrated in human communication. Visualising such expressions allow us to convey our creative thoughts, and evoke nuanced emotions. Recent text-to-image models like Stable Diffusion, on the other hand, struggle to depict non-literal expressions. Recent works primarily deal with this issue by compiling humanly annotated datasets on a small scale, which not only demands specialized expertise but also proves highly inefficient. To address this issue, we introduce ViPE: Visualise Pretty-much Everything. ViPE offers a series of lightweight and robust language models that have been trained on a large-scale set of lyrics with noisy visual descriptions that represent their implicit meaning. The synthetic visual descriptions are generated by GPT3.5 relying on neither human annotations nor images. ViPE effectively expresses any arbitrary piece of text into a visualisable description, enabling meaningful and high-quality image generation. We provide compelling evidence that ViPE is more robust than GPT3.5 in synthesising visual elaborations. ViPE also exhibits an understanding of figurative expressions comparable to human experts, providing a powerful and open-source backbone to many downstream applications such as music video and caption generation.", }
Figurative and non-literal expressions are profoundly integrated in human communication. Visualising such expressions allow us to convey our creative thoughts, and evoke nuanced emotions. Recent text-to-image models like Stable Diffusion, on the other hand, struggle to depict non-literal expressions. Recent works primarily deal with this issue by compiling humanly annotated datasets on a small scale, which not only demands specialized expertise but also proves highly inefficient. To address this issue, we introduce ViPE: Visualise Pretty-much Everything. ViPE offers a series of lightweight and robust language models that have been trained on a large-scale set of lyrics with noisy visual descriptions that represent their implicit meaning. The synthetic visual descriptions are generated by GPT3.5 relying on neither human annotations nor images. ViPE effectively expresses any arbitrary piece of text into a visualisable description, enabling meaningful and high-quality image generation. We provide compelling evidence that ViPE is more robust than GPT3.5 in synthesising visual elaborations. ViPE also exhibits an understanding of figurative expressions comparable to human experts, providing a powerful and open-source backbone to many downstream applications such as music video and caption generation.
[ "Shahmohammadi, Hassan", "Ghosh, Adhiraj", "Lensch, Hendrik" ]
ViPE: Visualise Pretty-much Everything
emnlp-main.333
null
[ "https://github.com/hazel1994/vipe" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.334.bib
https://aclanthology.org/2023.emnlp-main.334/
@inproceedings{li-etal-2023-semi, title = "Semi-automatic Data Enhancement for Document-Level Relation Extraction with Distant Supervision from Large Language Models", author = "Li, Junpeng and Jia, Zixia and Zheng, Zilong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.334", doi = "10.18653/v1/2023.emnlp-main.334", pages = "5495--5505", abstract = "Document-level Relation Extraction (DocRE), which aims to extract relations from a long context, is a critical challenge in achieving fine-grained structural comprehension and generating interpretable document representations. Inspired by recent advances in in-context learning capabilities emergent from large language models (LLMs), such as ChatGPT, we aim to design an automated annotation method for DocRE with minimum human effort. Unfortunately, vanilla in-context learning is infeasible for DocRE due to the plenty of predefined fine-grained relation types and the uncontrolled generations of LLMs. To tackle this issue, we propose a method integrating an LLM and a natural language inference (NLI) module to generate relation triples, thereby augmenting document-level relation datasets. We demonstrate the effectiveness of our approach by introducing an enhanced dataset known as DocGNRE, which excels in re-annotating numerous long-tail relation types. We are confident that our method holds the potential for broader applications in domain-specific relation type definitions and offers tangible benefits in advancing generalized language semantic comprehension.", }
Document-level Relation Extraction (DocRE), which aims to extract relations from a long context, is a critical challenge in achieving fine-grained structural comprehension and generating interpretable document representations. Inspired by recent advances in in-context learning capabilities emergent from large language models (LLMs), such as ChatGPT, we aim to design an automated annotation method for DocRE with minimum human effort. Unfortunately, vanilla in-context learning is infeasible for DocRE due to the plenty of predefined fine-grained relation types and the uncontrolled generations of LLMs. To tackle this issue, we propose a method integrating an LLM and a natural language inference (NLI) module to generate relation triples, thereby augmenting document-level relation datasets. We demonstrate the effectiveness of our approach by introducing an enhanced dataset known as DocGNRE, which excels in re-annotating numerous long-tail relation types. We are confident that our method holds the potential for broader applications in domain-specific relation type definitions and offers tangible benefits in advancing generalized language semantic comprehension.
[ "Li, Junpeng", "Jia, Zixia", "Zheng, Zilong" ]
Semi-automatic Data Enhancement for Document-Level Relation Extraction with Distant Supervision from Large Language Models
emnlp-main.334
2311.07314
[ "https://github.com/bigai-nlco/docgnre" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.335.bib
https://aclanthology.org/2023.emnlp-main.335/
@inproceedings{zhou-etal-2023-navigating, title = "Navigating the Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models", author = "Zhou, Kaitlyn and Jurafsky, Dan and Hashimoto, Tatsunori", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.335", doi = "10.18653/v1/2023.emnlp-main.335", pages = "5506--5524", abstract = "The increased deployment of LMs for real-world tasks involving knowledge and facts makes it important to understand model epistemology: what LMs think they know, and how their attitudes toward that knowledge are affected by language use in their inputs. Here, we study an aspect of model epistemology: how epistemic markers of certainty, uncertainty, or evidentiality like {``}I{'}m sure it{'}s{''}, {``}I think it{'}s{''}, or {``}Wikipedia says it{'}s{''} affect models, and whether they contribute to model failures. We develop a typology of epistemic markers and inject 50 markers into prompts for question answering. We find that LMs are highly sensitive to epistemic markers in prompts, with accuracies varying more than 80{\%}. Surprisingly, we find that expressions of high certainty result in a 7{\%} decrease in accuracy as compared to low certainty expressions; similarly, factive verbs hurt performance, while evidentials benefit performance. Our analysis of a popular pretraining dataset shows that these markers of uncertainty are associated with answers on question-answering websites, while markers of certainty are associated with questions. These associations may suggest that the behavior of LMs is based on mimicking observed language use, rather than truly reflecting epistemic uncertainty.", }
The increased deployment of LMs for real-world tasks involving knowledge and facts makes it important to understand model epistemology: what LMs think they know, and how their attitudes toward that knowledge are affected by language use in their inputs. Here, we study an aspect of model epistemology: how epistemic markers of certainty, uncertainty, or evidentiality like {``}I{'}m sure it{'}s{''}, {``}I think it{'}s{''}, or {``}Wikipedia says it{'}s{''} affect models, and whether they contribute to model failures. We develop a typology of epistemic markers and inject 50 markers into prompts for question answering. We find that LMs are highly sensitive to epistemic markers in prompts, with accuracies varying more than 80{\%}. Surprisingly, we find that expressions of high certainty result in a 7{\%} decrease in accuracy as compared to low certainty expressions; similarly, factive verbs hurt performance, while evidentials benefit performance. Our analysis of a popular pretraining dataset shows that these markers of uncertainty are associated with answers on question-answering websites, while markers of certainty are associated with questions. These associations may suggest that the behavior of LMs is based on mimicking observed language use, rather than truly reflecting epistemic uncertainty.
[ "Zhou, Kaitlyn", "Jurafsky, Dan", "Hashimoto, Tatsunori" ]
Navigating the Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models
emnlp-main.335
2302.13439
[ "" ]
https://huggingface.co/papers/2302.13439
2
0
0
3
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.336.bib
https://aclanthology.org/2023.emnlp-main.336/
@inproceedings{wu-etal-2023-elaborative, title = "Elaborative Simplification as Implicit Questions Under Discussion", author = "Wu, Yating and Sheffield, William and Mahowald, Kyle and Li, Junyi Jessy", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.336", doi = "10.18653/v1/2023.emnlp-main.336", pages = "5525--5537", abstract = "Automated text simplification, a technique useful for making text more accessible to people such as children and emergent bilinguals, is often thought of as a monolingual translation task from complex sentences to simplified sentences using encoder-decoder models. This view fails to account for elaborative simplification, where new information is added into the simplified text. This paper proposes to view elaborative simplification through the lens of the Question Under Discussion (QUD) framework, providing a robust way to investigate what writers elaborate upon, how they elaborate, and how elaborations fit into the discourse context by viewing elaborations as explicit answers to implicit questions. We introduce ELABQUD, consisting of 1.3K elaborations accompanied with implicit QUDs, to study these phenomena. We show that explicitly modeling QUD (via question generation) not only provides essential understanding of elaborative simplification and how the elaborations connect with the rest of the discourse, but also substantially improves the quality of elaboration generation.", }
Automated text simplification, a technique useful for making text more accessible to people such as children and emergent bilinguals, is often thought of as a monolingual translation task from complex sentences to simplified sentences using encoder-decoder models. This view fails to account for elaborative simplification, where new information is added into the simplified text. This paper proposes to view elaborative simplification through the lens of the Question Under Discussion (QUD) framework, providing a robust way to investigate what writers elaborate upon, how they elaborate, and how elaborations fit into the discourse context by viewing elaborations as explicit answers to implicit questions. We introduce ELABQUD, consisting of 1.3K elaborations accompanied with implicit QUDs, to study these phenomena. We show that explicitly modeling QUD (via question generation) not only provides essential understanding of elaborative simplification and how the elaborations connect with the rest of the discourse, but also substantially improves the quality of elaboration generation.
[ "Wu, Yating", "Sheffield, William", "Mahowald, Kyle", "Li, Junyi Jessy" ]
Elaborative Simplification as Implicit Questions Under Discussion
emnlp-main.336
2305.10387
[ "" ]
https://huggingface.co/papers/2305.10387
0
0
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.337.bib
https://aclanthology.org/2023.emnlp-main.337/
@inproceedings{mehra-etal-2023-entsumv2, title = "{E}nt{SUM}v2: Dataset, Models and Evaluation for More Abstractive Entity-Centric Summarization", author = "Mehra, Dhruv and Xie, Lingjue and Hofmann-Coyle, Ella and Kulkarni, Mayank and Preotiuc-Pietro, Daniel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.337", doi = "10.18653/v1/2023.emnlp-main.337", pages = "5538--5547", abstract = "Entity-centric summarization is a form of controllable summarization that aims to generate a summary for a specific entity given a document. Concise summaries are valuable in various real-life applications, as they enable users to quickly grasp the main points of the document focusing on an entity of interest. This paper presents ENTSUMV2, a more abstractive version of the original entity-centric ENTSUM summarization dataset. In ENTSUMV2 the annotated summaries are intentionally made shorter to benefit more specific and useful entity-centric summaries for downstream users. We conduct extensive experiments on this dataset using multiple abstractive summarization approaches that employ supervised fine-tuning or large-scale instruction tuning. Additionally, we perform comprehensive human evaluation that incorporates metrics for measuring crucial facets. These metrics provide a more fine-grained interpretation of the current state-of-the-art systems and highlight areas for future improvement.", }
Entity-centric summarization is a form of controllable summarization that aims to generate a summary for a specific entity given a document. Concise summaries are valuable in various real-life applications, as they enable users to quickly grasp the main points of the document focusing on an entity of interest. This paper presents ENTSUMV2, a more abstractive version of the original entity-centric ENTSUM summarization dataset. In ENTSUMV2 the annotated summaries are intentionally made shorter to benefit more specific and useful entity-centric summaries for downstream users. We conduct extensive experiments on this dataset using multiple abstractive summarization approaches that employ supervised fine-tuning or large-scale instruction tuning. Additionally, we perform comprehensive human evaluation that incorporates metrics for measuring crucial facets. These metrics provide a more fine-grained interpretation of the current state-of-the-art systems and highlight areas for future improvement.
[ "Mehra, Dhruv", "Xie, Lingjue", "Hofmann-Coyle, Ella", "Kulkarni, Mayank", "Preotiuc-Pietro, Daniel" ]
EntSUMv2: Dataset, Models and Evaluation for More Abstractive Entity-Centric Summarization
emnlp-main.337
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.338.bib
https://aclanthology.org/2023.emnlp-main.338/
@inproceedings{singh-etal-2023-scirepeval, title = "{S}ci{R}ep{E}val: A Multi-Format Benchmark for Scientific Document Representations", author = "Singh, Amanpreet and D{'}Arcy, Mike and Cohan, Arman and Downey, Doug and Feldman, Sergey", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.338", doi = "10.18653/v1/2023.emnlp-main.338", pages = "5548--5566", abstract = "Learned representations of scientific documents can serve as valuable input features for downstream tasks without further fine-tuning. However, existing benchmarks for evaluating these representations fail to capture the diversity of relevant tasks. In response, we introduce SciRepEval, the first comprehensive benchmark for training and evaluating scientific document representations. It includes 24 challenging and realistic tasks, 8 of which are new, across four formats: classification, regression, ranking and search. We then use this benchmark to study and improve the generalization ability of scientific document representation models. We show how state-of-the-art models like SPECTER and SciNCL struggle to generalize across the task formats, and that simple multi-task training fails to improve them. However, a new approach that learns multiple embeddings per document, each tailored to a different format, can improve performance. We experiment with task-format-specific control codes and adapters and find they outperform the existing single-embedding state-of-the-art by over 2 points absolute. We release the resulting family of multi-format models, called SPECTER2, for the community to use and build on.", }
Learned representations of scientific documents can serve as valuable input features for downstream tasks without further fine-tuning. However, existing benchmarks for evaluating these representations fail to capture the diversity of relevant tasks. In response, we introduce SciRepEval, the first comprehensive benchmark for training and evaluating scientific document representations. It includes 24 challenging and realistic tasks, 8 of which are new, across four formats: classification, regression, ranking and search. We then use this benchmark to study and improve the generalization ability of scientific document representation models. We show how state-of-the-art models like SPECTER and SciNCL struggle to generalize across the task formats, and that simple multi-task training fails to improve them. However, a new approach that learns multiple embeddings per document, each tailored to a different format, can improve performance. We experiment with task-format-specific control codes and adapters and find they outperform the existing single-embedding state-of-the-art by over 2 points absolute. We release the resulting family of multi-format models, called SPECTER2, for the community to use and build on.
[ "Singh, Amanpreet", "D{'}Arcy, Mike", "Cohan, Arman", "Downey, Doug", "Feldman, Sergey" ]
SciRepEval: A Multi-Format Benchmark for Scientific Document Representations
emnlp-main.338
null
[ "https://github.com/allenai/specter2" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.339.bib
https://aclanthology.org/2023.emnlp-main.339/
@inproceedings{dhuliawala-etal-2023-diachronic, title = "A Diachronic Perspective on User Trust in {AI} under Uncertainty", author = "Dhuliawala, Shehzaad and Zouhar, Vil{\'e}m and El-Assady, Mennatallah and Sachan, Mrinmaya", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.339", doi = "10.18653/v1/2023.emnlp-main.339", pages = "5567--5580", abstract = "In human-AI collaboration, users typically form a mental model of the AI system, which captures the user{'}s beliefs about when the system performs well and when it does not. The construction of this mental model is guided by both the system{'}s veracity as well as the system output presented to the user e.g., the system{'}s confidence and an explanation for the prediction. However, modern NLP systems are seldom calibrated and are often confidently incorrect about their predictions, which violates users{'} mental model and erodes their trust. In this work, we design a study where users bet on the correctness of an NLP system, and use it to study the evolution of user trust as a response to these trust-eroding events and how the user trust is rebuilt as a function of time after these events. We find that even a few highly inaccurate confidence estimation instances are enough to damage users{'} trust in the system and performance, which does not easily recover over time. We further find that users are more forgiving to the NLP system if it is unconfidently correct rather than confidently incorrect, even though, from a game-theoretic perspective, their payoff is equivalent. Finally, we find that each user can entertain multiple mental models of the system based on the type of the question. These results highlight the importance of confidence calibration in developing user-centered NLP applications to avoid damaging user trust and compromising the collaboration performance.", }
In human-AI collaboration, users typically form a mental model of the AI system, which captures the user{'}s beliefs about when the system performs well and when it does not. The construction of this mental model is guided by both the system{'}s veracity as well as the system output presented to the user e.g., the system{'}s confidence and an explanation for the prediction. However, modern NLP systems are seldom calibrated and are often confidently incorrect about their predictions, which violates users{'} mental model and erodes their trust. In this work, we design a study where users bet on the correctness of an NLP system, and use it to study the evolution of user trust as a response to these trust-eroding events and how the user trust is rebuilt as a function of time after these events. We find that even a few highly inaccurate confidence estimation instances are enough to damage users{'} trust in the system and performance, which does not easily recover over time. We further find that users are more forgiving to the NLP system if it is unconfidently correct rather than confidently incorrect, even though, from a game-theoretic perspective, their payoff is equivalent. Finally, we find that each user can entertain multiple mental models of the system based on the type of the question. These results highlight the importance of confidence calibration in developing user-centered NLP applications to avoid damaging user trust and compromising the collaboration performance.
[ "Dhuliawala, Shehzaad", "Zouhar, Vil{\\'e}m", "El-Assady, Mennatallah", "Sachan, Mrinmaya" ]
A Diachronic Perspective on User Trust in AI under Uncertainty
emnlp-main.339
2310.13544
[ "https://github.com/zouharvi/trust-intervention" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.340.bib
https://aclanthology.org/2023.emnlp-main.340/
@inproceedings{lv-etal-2023-ct, title = "{CT}-{GAT}: Cross-Task Generative Adversarial Attack based on Transferability", author = "Lv, Minxuan and Dai, Chengwei and Li, Kun and Zhou, Wei and Hu, Songlin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.340", doi = "10.18653/v1/2023.emnlp-main.340", pages = "5581--5591", abstract = "Neural network models are vulnerable to adversarial examples, and adversarial transferability further increases the risk of adversarial attacks. Current methods based on transferability often rely on substitute models, which can be impractical and costly in real-world scenarios due to the unavailability of training data and the victim model{'}s structural details. In this paper, we propose a novel approach that directly constructs adversarial examples by extracting transferable features across various tasks. Our key insight is that adversarial transferability can extend across different tasks. Specifically, we train a sequence-to-sequence generative model named CT-GAT (Cross-Task Generative Adversarial Attack) using adversarial sample data collected from multiple tasks to acquire universal adversarial features and generate adversarial examples for different tasks.We conduct experiments on ten distinct datasets, and the results demonstrate that our method achieves superior attack performance with small cost.", }
Neural network models are vulnerable to adversarial examples, and adversarial transferability further increases the risk of adversarial attacks. Current methods based on transferability often rely on substitute models, which can be impractical and costly in real-world scenarios due to the unavailability of training data and the victim model{'}s structural details. In this paper, we propose a novel approach that directly constructs adversarial examples by extracting transferable features across various tasks. Our key insight is that adversarial transferability can extend across different tasks. Specifically, we train a sequence-to-sequence generative model named CT-GAT (Cross-Task Generative Adversarial Attack) using adversarial sample data collected from multiple tasks to acquire universal adversarial features and generate adversarial examples for different tasks.We conduct experiments on ten distinct datasets, and the results demonstrate that our method achieves superior attack performance with small cost.
[ "Lv, Minxuan", "Dai, Chengwei", "Li, Kun", "Zhou, Wei", "Hu, Songlin" ]
CT-GAT: Cross-Task Generative Adversarial Attack based on Transferability
emnlp-main.340
null
[ "https://github.com/xiaoxuannlp/ct-gat" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.341.bib
https://aclanthology.org/2023.emnlp-main.341/
@inproceedings{yu-etal-2023-improving-long, title = "Improving Long Document Topic Segmentation Models With Enhanced Coherence Modeling", author = "Yu, Hai and Deng, Chong and Zhang, Qinglin and Liu, Jiaqing and Chen, Qian and Wang, Wen", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.341", doi = "10.18653/v1/2023.emnlp-main.341", pages = "5592--5605", abstract = "Topic segmentation is critical for obtaining structured documents and improving down- stream tasks such as information retrieval. Due to its ability of automatically exploring clues of topic shift from abundant labeled data, recent supervised neural models have greatly promoted the development of long document topic segmentation, but leaving the deeper relationship between coherence and topic segmentation underexplored. Therefore, this paper enhances the ability of supervised models to capture coherence from both logical structure and semantic similarity perspectives to further improve the topic segmentation performance, proposing Topic-aware Sentence Structure Prediction (TSSP) and Contrastive Semantic Similarity Learning (CSSL). Specifically, the TSSP task is proposed to force the model to comprehend structural information by learning the original relations between adjacent sentences in a disarrayed document, which is constructed by jointly disrupting the original document at topic and sentence levels. Moreover, we utilize inter- and intra-topic information to construct contrastive samples and design the CSSL objective to ensure that the sentences representations in the same topic have higher similarity, while those in different topics are less similar. Extensive experiments show that the Longformer with our approach significantly outperforms old state-of-the-art (SOTA) methods. Our approach improve $F_{1}$ of old SOTA by 3.42 (73.74 $\rightarrow$ 77.16) and reduces $P_{k}$ by 1.11 points (15.0 $\rightarrow$ 13.89) on WIKI-727K and achieves an average relative reduction of 4.3{\%} on $P_{k}$ on WikiSection. The average relative $P_{k}$ drop of 8.38{\%} on two out-of-domain datasets also demonstrates the robustness of our approach.", }
Topic segmentation is critical for obtaining structured documents and improving down- stream tasks such as information retrieval. Due to its ability of automatically exploring clues of topic shift from abundant labeled data, recent supervised neural models have greatly promoted the development of long document topic segmentation, but leaving the deeper relationship between coherence and topic segmentation underexplored. Therefore, this paper enhances the ability of supervised models to capture coherence from both logical structure and semantic similarity perspectives to further improve the topic segmentation performance, proposing Topic-aware Sentence Structure Prediction (TSSP) and Contrastive Semantic Similarity Learning (CSSL). Specifically, the TSSP task is proposed to force the model to comprehend structural information by learning the original relations between adjacent sentences in a disarrayed document, which is constructed by jointly disrupting the original document at topic and sentence levels. Moreover, we utilize inter- and intra-topic information to construct contrastive samples and design the CSSL objective to ensure that the sentences representations in the same topic have higher similarity, while those in different topics are less similar. Extensive experiments show that the Longformer with our approach significantly outperforms old state-of-the-art (SOTA) methods. Our approach improve $F_{1}$ of old SOTA by 3.42 (73.74 $\rightarrow$ 77.16) and reduces $P_{k}$ by 1.11 points (15.0 $\rightarrow$ 13.89) on WIKI-727K and achieves an average relative reduction of 4.3{\%} on $P_{k}$ on WikiSection. The average relative $P_{k}$ drop of 8.38{\%} on two out-of-domain datasets also demonstrates the robustness of our approach.
[ "Yu, Hai", "Deng, Chong", "Zhang, Qinglin", "Liu, Jiaqing", "Chen, Qian", "Wang, Wen" ]
Improving Long Document Topic Segmentation Models With Enhanced Coherence Modeling
emnlp-main.341
2310.11772
[ "https://github.com/alibaba-damo-academy/spokennlp" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.342.bib
https://aclanthology.org/2023.emnlp-main.342/
@inproceedings{chae-etal-2023-dialogue, title = "Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents", author = "Chae, Hyungjoo and Song, Yongho and Ong, Kai and Kwon, Taeyoon and Kim, Minjin and Yu, Youngjae and Lee, Dongha and Kang, Dongyeop and Yeo, Jinyoung", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.342", doi = "10.18653/v1/2023.emnlp-main.342", pages = "5606--5632", abstract = "Human-like chatbots necessitate the use of commonsense reasoning in order to effectively comprehend and respond to implicit information present within conversations. Achieving such coherence and informativeness in responses, however, is a non-trivial task. Even for large language models (LLMs), the task of identifying and aggregating key evidence within a single hop presents a substantial challenge. This complexity arises because such evidence is scattered across multiple turns in a conversation, thus necessitating integration over multiple hops. Hence, our focus is to facilitate such multi-hop reasoning over a dialogue context, namely dialogue chain-of-thought (CoT) reasoning. To this end, we propose a knowledge distillation framework that leverages LLMs as unreliable teachers and selectively distills consistent and helpful rationales via alignment filters. We further present DOCTOR, a DialOgue Chain-of-ThOught Reasoner that provides reliable CoT rationales for response generation. We conduct extensive experiments to show that enhancing dialogue agents with high-quality rationales from DOCTOR significantly improves the quality of their responses.", }
Human-like chatbots necessitate the use of commonsense reasoning in order to effectively comprehend and respond to implicit information present within conversations. Achieving such coherence and informativeness in responses, however, is a non-trivial task. Even for large language models (LLMs), the task of identifying and aggregating key evidence within a single hop presents a substantial challenge. This complexity arises because such evidence is scattered across multiple turns in a conversation, thus necessitating integration over multiple hops. Hence, our focus is to facilitate such multi-hop reasoning over a dialogue context, namely dialogue chain-of-thought (CoT) reasoning. To this end, we propose a knowledge distillation framework that leverages LLMs as unreliable teachers and selectively distills consistent and helpful rationales via alignment filters. We further present DOCTOR, a DialOgue Chain-of-ThOught Reasoner that provides reliable CoT rationales for response generation. We conduct extensive experiments to show that enhancing dialogue agents with high-quality rationales from DOCTOR significantly improves the quality of their responses.
[ "Chae, Hyungjoo", "Song, Yongho", "Ong, Kai", "Kwon, Taeyoon", "Kim, Minjin", "Yu, Youngjae", "Lee, Dongha", "Kang, Dongyeop", "Yeo, Jinyoung" ]
Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents
emnlp-main.342
2310.09343
[ "https://github.com/kyle8581/dialoguecot" ]
https://huggingface.co/papers/2310.09343
4
2
0
9
[ "DLI-Lab/DOCTOR" ]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.343.bib
https://aclanthology.org/2023.emnlp-main.343/
@inproceedings{giulianelli-etal-2023-information, title = "Information Value: Measuring Utterance Predictability as Distance from Plausible Alternatives", author = "Giulianelli, Mario and Wallbridge, Sarenne and Fern{\'a}ndez, Raquel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.343", doi = "10.18653/v1/2023.emnlp-main.343", pages = "5633--5653", abstract = "We present information value, a measure which quantifies the predictability of an utterance relative to a set of plausible alternatives. We introduce a method to obtain interpretable estimates of information value using neural text generators, and exploit their psychometric predictive power to investigate the dimensions of predictability that drive human comprehension behaviour. Information value is a stronger predictor of utterance acceptability in written and spoken dialogue than aggregates of token-level surprisal and it is complementary to surprisal for predicting eye-tracked reading times.", }
We present information value, a measure which quantifies the predictability of an utterance relative to a set of plausible alternatives. We introduce a method to obtain interpretable estimates of information value using neural text generators, and exploit their psychometric predictive power to investigate the dimensions of predictability that drive human comprehension behaviour. Information value is a stronger predictor of utterance acceptability in written and spoken dialogue than aggregates of token-level surprisal and it is complementary to surprisal for predicting eye-tracked reading times.
[ "Giulianelli, Mario", "Wallbridge, Sarenne", "Fern{\\'a}ndez, Raquel" ]
Information Value: Measuring Utterance Predictability as Distance from Plausible Alternatives
emnlp-main.343
2310.13676
[ "https://github.com/dmg-illc/information-value" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.344.bib
https://aclanthology.org/2023.emnlp-main.344/
@inproceedings{miao-etal-2023-generating, title = "Generating Commonsense Counterfactuals for Stable Relation Extraction", author = "Miao, Xin and Li, Yongqi and Qian, Tieyun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.344", doi = "10.18653/v1/2023.emnlp-main.344", pages = "5654--5668", abstract = "Recent studies on counterfactual augmented data have achieved great success in the coarse-grained natural language processing tasks. However, existing methods encounter two major problems when dealing with the fine-grained relation extraction tasks. One is that they struggle to accurately identify causal terms under the invariant entity constraint. The other is that they ignore the commonsense constraint. To solve these problems, we propose a novel framework to generate commonsense counterfactuals for stable relation extraction. Specifically, to identify causal terms accurately, we introduce an intervention-based strategy and leverage a constituency parser for correction. To satisfy the commonsense constraint, we introduce the concept knowledge base WordNet and design a bottom-up relation expansion algorithm on it to uncover commonsense relations between entities. We conduct a series of comprehensive evaluations, including the low-resource, out-of-domain, and adversarial-attack settings. The results demonstrate that our framework significantly enhances the stability of base relation extraction models.", }
Recent studies on counterfactual augmented data have achieved great success in the coarse-grained natural language processing tasks. However, existing methods encounter two major problems when dealing with the fine-grained relation extraction tasks. One is that they struggle to accurately identify causal terms under the invariant entity constraint. The other is that they ignore the commonsense constraint. To solve these problems, we propose a novel framework to generate commonsense counterfactuals for stable relation extraction. Specifically, to identify causal terms accurately, we introduce an intervention-based strategy and leverage a constituency parser for correction. To satisfy the commonsense constraint, we introduce the concept knowledge base WordNet and design a bottom-up relation expansion algorithm on it to uncover commonsense relations between entities. We conduct a series of comprehensive evaluations, including the low-resource, out-of-domain, and adversarial-attack settings. The results demonstrate that our framework significantly enhances the stability of base relation extraction models.
[ "Miao, Xin", "Li, Yongqi", "Qian, Tieyun" ]
Generating Commonsense Counterfactuals for Stable Relation Extraction
emnlp-main.344
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.345.bib
https://aclanthology.org/2023.emnlp-main.345/
@inproceedings{deshpande-etal-2023-c, title = "{C}-{STS}: Conditional Semantic Textual Similarity", author = "Deshpande, Ameet and Jimenez, Carlos and Chen, Howard and Murahari, Vishvak and Graf, Victoria and Rajpurohit, Tanmay and Kalyan, Ashwin and Chen, Danqi and Narasimhan, Karthik", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.345", doi = "10.18653/v1/2023.emnlp-main.345", pages = "5669--5690", abstract = "Semantic textual similarity (STS) has been a cornerstone task in NLP that measures the degree of similarity between a pair of sentences, with applications in information retrieval, question answering, and embedding methods. However, it is an inherently ambiguous task, with the sentence similarity depending on the specific aspect of interest. We resolve this ambiguity by proposing a novel task called conditional STS (C-STS) which measures similarity conditioned on an aspect elucidated in natural language (hereon, condition). As an example, the similarity between the sentences {``}The NBA player shoots a three-pointer.{''} and {``}A man throws a tennis ball into the air to serve.{''} is higher for the condition {``}The motion of the ball.{''} (both upward) and lower for {``}The size of the ball.{''} (one large and one small). C-STS{'}s advantages are two-fold: (1) it reduces the subjectivity and ambiguity of STS, and (2) enables fine-grained similarity evaluation using diverse conditions. C-STS contains almost 20,000 instances from diverse domains and we evaluate several state-of-the-art models to demonstrate that even the most performant fine-tuning and in-context learning models (GPT-4, Flan, SimCSE) find it challenging, with Spearman correlation scores of {\textless}50. We encourage the community to evaluate their models on C-STS to provide a more holistic view of semantic similarity and natural language understanding.", }
Semantic textual similarity (STS) has been a cornerstone task in NLP that measures the degree of similarity between a pair of sentences, with applications in information retrieval, question answering, and embedding methods. However, it is an inherently ambiguous task, with the sentence similarity depending on the specific aspect of interest. We resolve this ambiguity by proposing a novel task called conditional STS (C-STS) which measures similarity conditioned on an aspect elucidated in natural language (hereon, condition). As an example, the similarity between the sentences {``}The NBA player shoots a three-pointer.{''} and {``}A man throws a tennis ball into the air to serve.{''} is higher for the condition {``}The motion of the ball.{''} (both upward) and lower for {``}The size of the ball.{''} (one large and one small). C-STS{'}s advantages are two-fold: (1) it reduces the subjectivity and ambiguity of STS, and (2) enables fine-grained similarity evaluation using diverse conditions. C-STS contains almost 20,000 instances from diverse domains and we evaluate several state-of-the-art models to demonstrate that even the most performant fine-tuning and in-context learning models (GPT-4, Flan, SimCSE) find it challenging, with Spearman correlation scores of {\textless}50. We encourage the community to evaluate their models on C-STS to provide a more holistic view of semantic similarity and natural language understanding.
[ "Deshp", "e, Ameet", "Jimenez, Carlos", "Chen, Howard", "Murahari, Vishvak", "Graf, Victoria", "Rajpurohit, Tanmay", "Kalyan, Ashwin", "Chen, Danqi", "Narasimhan, Karthik" ]
C-STS: Conditional Semantic Textual Similarity
emnlp-main.345
null
[ "https://github.com/princeton-nlp/c-sts" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.346.bib
https://aclanthology.org/2023.emnlp-main.346/
@inproceedings{goldfarb-tarrant-etal-2023-cross, title = "Cross-lingual Transfer Can Worsen Bias in Sentiment Analysis", author = {Goldfarb-Tarrant, Seraphina and Ross, Bj{\"o}rn and Lopez, Adam}, editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.346", doi = "10.18653/v1/2023.emnlp-main.346", pages = "5691--5704", abstract = "Sentiment analysis (SA) systems are widely deployed in many of the world{'}s languages, and there is well-documented evidence of demographic bias in these systems. In languages beyond English, scarcer training data is often supplemented with transfer learning using pre-trained models, including multilingual models trained on other languages. In some cases, even supervision data comes from other languages. Does cross-lingual transfer also import new biases? To answer this question, we use counterfactual evaluation to test whether gender or racial biases are imported when using cross-lingual transfer, compared to a monolingual transfer setting. Across five languages, we find that systems using cross-lingual transfer usually become more biased than their monolingual counterparts. We also find racial biases to be much more prevalent than gender biases. To spur further research on this topic, we release the sentiment models we used for this study, and the intermediate checkpoints throughout training, yielding 1,525 distinct models; we also release our evaluation code.", }
Sentiment analysis (SA) systems are widely deployed in many of the world{'}s languages, and there is well-documented evidence of demographic bias in these systems. In languages beyond English, scarcer training data is often supplemented with transfer learning using pre-trained models, including multilingual models trained on other languages. In some cases, even supervision data comes from other languages. Does cross-lingual transfer also import new biases? To answer this question, we use counterfactual evaluation to test whether gender or racial biases are imported when using cross-lingual transfer, compared to a monolingual transfer setting. Across five languages, we find that systems using cross-lingual transfer usually become more biased than their monolingual counterparts. We also find racial biases to be much more prevalent than gender biases. To spur further research on this topic, we release the sentiment models we used for this study, and the intermediate checkpoints throughout training, yielding 1,525 distinct models; we also release our evaluation code.
[ "Goldfarb-Tarrant, Seraphina", "Ross, Bj{\\\"o}rn", "Lopez, Adam" ]
Cross-lingual Transfer Can Worsen Bias in Sentiment Analysis
emnlp-main.346
2305.12709
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.347.bib
https://aclanthology.org/2023.emnlp-main.347/
@inproceedings{yang-etal-2023-rumor, title = "Rumor Detection on Social Media with Crowd Intelligence and {C}hat{GPT}-Assisted Networks", author = "Yang, Chang and Zhang, Peng and Qiao, Wenbo and Gao, Hui and Zhao, Jiaming", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.347", doi = "10.18653/v1/2023.emnlp-main.347", pages = "5705--5717", abstract = "In the era of widespread dissemination through social media, the task of rumor detection plays a pivotal role in establishing a trustworthy and reliable information environment. Nonetheless, existing research on rumor detection confronts several challenges: the limited expressive power of text encoding sequences, difficulties in domain knowledge coverage and effective information extraction with knowledge graph-based methods, and insufficient mining of semantic structural information. To address these issues, we propose a Crowd Intelligence and ChatGPT-Assisted Network(CICAN) for rumor classification. Specifically, we present a crowd intelligence-based semantic feature learning module to capture textual content{'}s sequential and hierarchical features. Then, we design a knowledge-based semantic structural mining module that leverages ChatGPT for knowledge enhancement. Finally, we construct an entity-sentence heterogeneous graph and design Entity-Aware Heterogeneous Attention to effectively integrate diverse structural information meta-paths. Experimental results demonstrate that CICAN achieves performance improvement in rumor detection tasks, validating the effectiveness and rationality of using large language models as auxiliary tools.", }
In the era of widespread dissemination through social media, the task of rumor detection plays a pivotal role in establishing a trustworthy and reliable information environment. Nonetheless, existing research on rumor detection confronts several challenges: the limited expressive power of text encoding sequences, difficulties in domain knowledge coverage and effective information extraction with knowledge graph-based methods, and insufficient mining of semantic structural information. To address these issues, we propose a Crowd Intelligence and ChatGPT-Assisted Network(CICAN) for rumor classification. Specifically, we present a crowd intelligence-based semantic feature learning module to capture textual content{'}s sequential and hierarchical features. Then, we design a knowledge-based semantic structural mining module that leverages ChatGPT for knowledge enhancement. Finally, we construct an entity-sentence heterogeneous graph and design Entity-Aware Heterogeneous Attention to effectively integrate diverse structural information meta-paths. Experimental results demonstrate that CICAN achieves performance improvement in rumor detection tasks, validating the effectiveness and rationality of using large language models as auxiliary tools.
[ "Yang, Chang", "Zhang, Peng", "Qiao, Wenbo", "Gao, Hui", "Zhao, Jiaming" ]
Rumor Detection on Social Media with Crowd Intelligence and ChatGPT-Assisted Networks
emnlp-main.347
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.348.bib
https://aclanthology.org/2023.emnlp-main.348/
@inproceedings{zhang-etal-2023-grounding, title = "Grounding Visual Illusions in Language: Do Vision-Language Models Perceive Illusions Like Humans?", author = "Zhang, Yichi and Pan, Jiayi and Zhou, Yuchen and Pan, Rui and Chai, Joyce", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.348", doi = "10.18653/v1/2023.emnlp-main.348", pages = "5718--5728", abstract = "Vision-Language Models (VLMs) are trained on vast amounts of data captured by humans emulating our understanding of the world. However, known as visual illusions, human{'}s perception of reality isn{'}t always faithful to the physical world. This raises a key question: do VLMs have the similar kind of illusions as humans do, or do they faithfully learn to represent reality? To investigate this question, we build a dataset containing five types of visual illusions and formulate four tasks to examine visual illusions in state-of-the-art VLMs. Our findings have shown that although the overall alignment is low, larger models are closer to human perception and more susceptible to visual illusions. Our dataset and initial findings will promote a better understanding of visual illusions in humans and machines and provide a stepping stone for future computational models that can better align humans and machines in perceiving and communicating about the shared visual world. The code and data are available at [github.com/vl-illusion/dataset](https://github.com/vl-illusion/dataset).", }
Vision-Language Models (VLMs) are trained on vast amounts of data captured by humans emulating our understanding of the world. However, known as visual illusions, human{'}s perception of reality isn{'}t always faithful to the physical world. This raises a key question: do VLMs have the similar kind of illusions as humans do, or do they faithfully learn to represent reality? To investigate this question, we build a dataset containing five types of visual illusions and formulate four tasks to examine visual illusions in state-of-the-art VLMs. Our findings have shown that although the overall alignment is low, larger models are closer to human perception and more susceptible to visual illusions. Our dataset and initial findings will promote a better understanding of visual illusions in humans and machines and provide a stepping stone for future computational models that can better align humans and machines in perceiving and communicating about the shared visual world. The code and data are available at [github.com/vl-illusion/dataset](https://github.com/vl-illusion/dataset).
[ "Zhang, Yichi", "Pan, Jiayi", "Zhou, Yuchen", "Pan, Rui", "Chai, Joyce" ]
Grounding Visual Illusions in Language: Do Vision-Language Models Perceive Illusions Like Humans?
emnlp-main.348
2311.00047
[ "https://github.com/vl-illusion/dataset" ]
https://huggingface.co/papers/2311.00047
3
8
1
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.349.bib
https://aclanthology.org/2023.emnlp-main.349/
@inproceedings{heppell-etal-2023-analysing, title = "Analysing State-Backed Propaganda Websites: a New Dataset and Linguistic Study", author = "Heppell, Freddy and Bontcheva, Kalina and Scarton, Carolina", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.349", doi = "10.18653/v1/2023.emnlp-main.349", pages = "5729--5741", abstract = "This paper analyses two hitherto unstudied sites sharing state-backed disinformation, Reliable Recent News (rrn.world) and WarOnFakes (waronfakes.com), which publish content in Arabic, Chinese, English, French, German, and Spanish. We describe our content acquisition methodology and perform cross-site unsupervised topic clustering on the resulting multilingual dataset. We also perform linguistic and temporal analysis of the web page translations and topics over time, and investigate articles with false publication dates. We make publicly available this new dataset of 14,053 articles, annotated with each language version, and additional metadata such as links and images. The main contribution of this paper for the NLP community is in the novel dataset which enables studies of disinformation networks, and the training of NLP tools for disinformation detection.", }
This paper analyses two hitherto unstudied sites sharing state-backed disinformation, Reliable Recent News (rrn.world) and WarOnFakes (waronfakes.com), which publish content in Arabic, Chinese, English, French, German, and Spanish. We describe our content acquisition methodology and perform cross-site unsupervised topic clustering on the resulting multilingual dataset. We also perform linguistic and temporal analysis of the web page translations and topics over time, and investigate articles with false publication dates. We make publicly available this new dataset of 14,053 articles, annotated with each language version, and additional metadata such as links and images. The main contribution of this paper for the NLP community is in the novel dataset which enables studies of disinformation networks, and the training of NLP tools for disinformation detection.
[ "Heppell, Freddy", "Bontcheva, Kalina", "Scarton, Carolina" ]
Analysing State-Backed Propaganda Websites: a New Dataset and Linguistic Study
emnlp-main.349
null
[ "https://github.com/gatenlp/wordpress-site-extractor" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.350.bib
https://aclanthology.org/2023.emnlp-main.350/
@inproceedings{zhu-etal-2023-controllable, title = "Controllable Contrastive Generation for Multilingual Biomedical Entity Linking", author = "Zhu, Tiantian and Qin, Yang and Chen, Qingcai and Mu, Xin and Yu, Changlong and Xiang, Yang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.350", doi = "10.18653/v1/2023.emnlp-main.350", pages = "5742--5753", abstract = "Multilingual biomedical entity linking (MBEL) aims to map language-specific mentions in the biomedical text to standardized concepts in a multilingual knowledge base (KB) such as Unified Medical Language System (UMLS). In this paper, we propose Con2GEN, a prompt-based controllable contrastive generation framework for MBEL, which summarizes multidimensional information of the UMLS concept mentioned in biomedical text into a natural sentence following a predefined template. Instead of tackling the MBEL problem with a discriminative classifier, we formulate it as a sequence-to-sequence generation task, which better exploits the shared dependencies between source mentions and target entities. Moreover, Con2GEN matches against UMLS concepts in as many languages and types as possible, hence facilitating cross-information disambiguation. Extensive experiments show that our model achieves promising performance improvements compared with several state-of-the-art techniques on the XL-BEL and the Mantra GSC datasets spanning 12 typologically diverse languages.", }
Multilingual biomedical entity linking (MBEL) aims to map language-specific mentions in the biomedical text to standardized concepts in a multilingual knowledge base (KB) such as Unified Medical Language System (UMLS). In this paper, we propose Con2GEN, a prompt-based controllable contrastive generation framework for MBEL, which summarizes multidimensional information of the UMLS concept mentioned in biomedical text into a natural sentence following a predefined template. Instead of tackling the MBEL problem with a discriminative classifier, we formulate it as a sequence-to-sequence generation task, which better exploits the shared dependencies between source mentions and target entities. Moreover, Con2GEN matches against UMLS concepts in as many languages and types as possible, hence facilitating cross-information disambiguation. Extensive experiments show that our model achieves promising performance improvements compared with several state-of-the-art techniques on the XL-BEL and the Mantra GSC datasets spanning 12 typologically diverse languages.
[ "Zhu, Tiantian", "Qin, Yang", "Chen, Qingcai", "Mu, Xin", "Yu, Changlong", "Xiang, Yang" ]
Controllable Contrastive Generation for Multilingual Biomedical Entity Linking
emnlp-main.350
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.351.bib
https://aclanthology.org/2023.emnlp-main.351/
@inproceedings{do-etal-2023-hyperrouter, title = "{H}yper{R}outer: Towards Efficient Training and Inference of Sparse Mixture of Experts", author = "Do, Truong Giang and Khiem, Le and Pham, Quang and Nguyen, TrungTin and Doan, Thanh-Nam and Nguyen, Binh and Liu, Chenghao and Ramasamy, Savitha and Li, Xiaoli and Hoi, Steven", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.351", doi = "10.18653/v1/2023.emnlp-main.351", pages = "5754--5765", abstract = "By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models. Recent findings suggest that fixing the routers can achieve competitive performance by alleviating the collapsing problem, where all experts eventually learn similar representations. However, this strategy has two key limitations: (i) the policy derived from random routers might be sub-optimal, and (ii) it requires extensive resources during training and evaluation, leading to limited efficiency gains. This work introduces HyperRouter, which dynamically generates the router{'}s parameters through a fixed hypernetwork and trainable embeddings to achieve a balance between training the routers and freezing them to learn an improved routing policy. Extensive experiments across a wide range of tasks demonstrate the superior performance and efficiency gains of HyperRouter compared to existing routing methods. Our implementation is publicly available at \url{https://github.com/giangdip2410/HyperRouter}.", }
By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models. Recent findings suggest that fixing the routers can achieve competitive performance by alleviating the collapsing problem, where all experts eventually learn similar representations. However, this strategy has two key limitations: (i) the policy derived from random routers might be sub-optimal, and (ii) it requires extensive resources during training and evaluation, leading to limited efficiency gains. This work introduces HyperRouter, which dynamically generates the router{'}s parameters through a fixed hypernetwork and trainable embeddings to achieve a balance between training the routers and freezing them to learn an improved routing policy. Extensive experiments across a wide range of tasks demonstrate the superior performance and efficiency gains of HyperRouter compared to existing routing methods. Our implementation is publicly available at \url{https://github.com/giangdip2410/HyperRouter}.
[ "Do, Truong Giang", "Khiem, Le", "Pham, Quang", "Nguyen, TrungTin", "Doan, Thanh-Nam", "Nguyen, Binh", "Liu, Chenghao", "Ramasamy, Savitha", "Li, Xiaoli", "Hoi, Steven" ]
HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts
emnlp-main.351
null
[ "https://github.com/giangdip2410/hyperrouter" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.352.bib
https://aclanthology.org/2023.emnlp-main.352/
@inproceedings{zhang-yang-2023-mediahg, title = "{M}edia{HG}: Rethinking Eye-catchy Features in Social Media Headline Generation", author = "Zhang, Boning and Yang, Yang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.352", doi = "10.18653/v1/2023.emnlp-main.352", pages = "5766--5777", abstract = "An attractive blog headline on social media platforms can immediately grab readers and trigger more clicks. However, a good headline shall not only contract the main content but also be eye-catchy with domain platform features, which are decided by the website{'}s users and objectives. With effective headlines, bloggers can obtain more site traffic and profits, while readers can have easier access to topics of interest. In this paper, we propose a disentanglement-based headline generation model: MediaHG (Social Media Headline Generation), which can balance the content and contextual features. Specifically, we first devise a sample module for various document views and generate the corresponding headline candidates. Then, we incorporate contrastive learning and auxiliary multi-task to choose the best domain-suitable headline, according to the disentangled budgets. Besides, our separated processing gains more flexible adaptation for other headline generation tasks with special domain features. Our model is built from the content and headlines of 70k hot posts collected from REDBook, a Chinese social media platform for daily sharing. Experimental results with language metrics ROUGE and human evaluation show the improvement in the headline generation task for the platform.", }
An attractive blog headline on social media platforms can immediately grab readers and trigger more clicks. However, a good headline shall not only contract the main content but also be eye-catchy with domain platform features, which are decided by the website{'}s users and objectives. With effective headlines, bloggers can obtain more site traffic and profits, while readers can have easier access to topics of interest. In this paper, we propose a disentanglement-based headline generation model: MediaHG (Social Media Headline Generation), which can balance the content and contextual features. Specifically, we first devise a sample module for various document views and generate the corresponding headline candidates. Then, we incorporate contrastive learning and auxiliary multi-task to choose the best domain-suitable headline, according to the disentangled budgets. Besides, our separated processing gains more flexible adaptation for other headline generation tasks with special domain features. Our model is built from the content and headlines of 70k hot posts collected from REDBook, a Chinese social media platform for daily sharing. Experimental results with language metrics ROUGE and human evaluation show the improvement in the headline generation task for the platform.
[ "Zhang, Boning", "Yang, Yang" ]
MediaHG: Rethinking Eye-catchy Features in Social Media Headline Generation
emnlp-main.352
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.353.bib
https://aclanthology.org/2023.emnlp-main.353/
@inproceedings{xu-etal-2023-fine, title = "Fine-tuned {LLM}s Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over {W}ikidata", author = "Xu, Silei and Liu, Shicheng and Culhane, Theo and Pertseva, Elizaveta and Wu, Meng-Hsi and Semnani, Sina and Lam, Monica", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.353", doi = "10.18653/v1/2023.emnlp-main.353", pages = "5778--5791", abstract = "While large language models (LLMs) can answer many questions correctly, they can also hallucinate and give wrong answers. Wikidata, with its over 12 billion facts, can be used to ground LLMs to improve their factuality. This paper presents WikiWebQuestions, a high-quality question answering benchmark for Wikidata. Ported over from WebQuestions for Freebase, it consists of real-world data with SPARQL annotation. This paper presents a few-shot sequence-to-sequence semantic parser for Wikidata. We modify SPARQL to use the unique domain and property names instead of their IDs. We train the parser to use either the results from an entity linker or mentions in the query. We fine-tune LLaMA by adding the few-shot training data to that used to fine-tune Alpaca. Our experimental results demonstrate the effectiveness of this methodology, establishing a strong baseline of 76{\%} and 65{\%} answer accuracy in the dev and test sets of WikiWebQuestions, respectively. By pairing our semantic parser with GPT-3, we combine verifiable results with qualified GPT-3 guesses to provide useful answers to 96{\%} of the questions in dev. We also show that our method outperforms the state-of-the-art for the QALD-7 Wikidata dataset by 3.6{\%} in F1 score.", }
While large language models (LLMs) can answer many questions correctly, they can also hallucinate and give wrong answers. Wikidata, with its over 12 billion facts, can be used to ground LLMs to improve their factuality. This paper presents WikiWebQuestions, a high-quality question answering benchmark for Wikidata. Ported over from WebQuestions for Freebase, it consists of real-world data with SPARQL annotation. This paper presents a few-shot sequence-to-sequence semantic parser for Wikidata. We modify SPARQL to use the unique domain and property names instead of their IDs. We train the parser to use either the results from an entity linker or mentions in the query. We fine-tune LLaMA by adding the few-shot training data to that used to fine-tune Alpaca. Our experimental results demonstrate the effectiveness of this methodology, establishing a strong baseline of 76{\%} and 65{\%} answer accuracy in the dev and test sets of WikiWebQuestions, respectively. By pairing our semantic parser with GPT-3, we combine verifiable results with qualified GPT-3 guesses to provide useful answers to 96{\%} of the questions in dev. We also show that our method outperforms the state-of-the-art for the QALD-7 Wikidata dataset by 3.6{\%} in F1 score.
[ "Xu, Silei", "Liu, Shicheng", "Culhane, Theo", "Pertseva, Elizaveta", "Wu, Meng-Hsi", "Semnani, Sina", "Lam, Monica" ]
Fine-tuned LLMs Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over Wikidata
emnlp-main.353
2305.14202
[ "https://github.com/stanford-oval/wikidata-emnlp23" ]
https://huggingface.co/papers/2305.14202
2
0
0
7
[ "stanford-oval/llama-7b-wikiwebquestions-qald7", "stanford-oval/llama-7b-wikiwebquestions" ]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.354.bib
https://aclanthology.org/2023.emnlp-main.354/
@inproceedings{mekala-etal-2023-zerotop, title = "{ZEROTOP}: Zero-Shot Task-Oriented Semantic Parsing using Large Language Models", author = "Mekala, Dheeraj and Wolfe, Jason and Roy, Subhro", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.354", doi = "10.18653/v1/2023.emnlp-main.354", pages = "5792--5799", abstract = "We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. LLMs are generally trained on publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions; and as a result, cannot handle questions corresponding to missing slots. We address this by fine-tuning a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can zero-shot parse $\approx$ 16{\%} of utterances in the MTOP dataset.", }
We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. LLMs are generally trained on publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions; and as a result, cannot handle questions corresponding to missing slots. We address this by fine-tuning a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can zero-shot parse $\approx$ 16{\%} of utterances in the MTOP dataset.
[ "Mekala, Dheeraj", "Wolfe, Jason", "Roy, Subhro" ]
ZEROTOP: Zero-Shot Task-Oriented Semantic Parsing using Large Language Models
emnlp-main.354
2212.10815
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.355.bib
https://aclanthology.org/2023.emnlp-main.355/
@inproceedings{bout-etal-2023-efficient, title = "Efficient Grammatical Error Correction Via Multi-Task Training and Optimized Training Schedule", author = "Bout, Andrey and Podolskiy, Alexander and Nikolenko, Sergey and Piontkovskaya, Irina", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.355", doi = "10.18653/v1/2023.emnlp-main.355", pages = "5800--5816", abstract = "Progress in neural grammatical error correction (GEC) is hindered by the lack of annotated training data. Sufficient amounts of high-quality manually annotated data are not available, so recent research has relied on generating synthetic data, pretraining on it, and then fine-tuning on real datasets; performance gains have been achieved either by ensembling or by using huge pretrained models such as XXL-T5 as the backbone. In this work, we explore an orthogonal direction: how to use available data more efficiently. First, we propose auxiliary tasks that exploit the alignment between the original and corrected sentences, such as predicting a sequence of corrections. We formulate each task as a sequence-to-sequence problem and perform multi-task training. Second, we discover that the order of datasets used for training and even individual instances within a dataset may have important effects on the final performance, so we set out to find the best training schedule. Together, these two ideas lead to significant improvements, producing results that improve state of the art with much smaller models; in particular, we outperform the best models based on T5-XXL (11B parameters) with a BART-based model (400M parameters).", }
Progress in neural grammatical error correction (GEC) is hindered by the lack of annotated training data. Sufficient amounts of high-quality manually annotated data are not available, so recent research has relied on generating synthetic data, pretraining on it, and then fine-tuning on real datasets; performance gains have been achieved either by ensembling or by using huge pretrained models such as XXL-T5 as the backbone. In this work, we explore an orthogonal direction: how to use available data more efficiently. First, we propose auxiliary tasks that exploit the alignment between the original and corrected sentences, such as predicting a sequence of corrections. We formulate each task as a sequence-to-sequence problem and perform multi-task training. Second, we discover that the order of datasets used for training and even individual instances within a dataset may have important effects on the final performance, so we set out to find the best training schedule. Together, these two ideas lead to significant improvements, producing results that improve state of the art with much smaller models; in particular, we outperform the best models based on T5-XXL (11B parameters) with a BART-based model (400M parameters).
[ "Bout, Andrey", "Podolskiy, Alex", "er", "Nikolenko, Sergey", "Piontkovskaya, Irina" ]
Efficient Grammatical Error Correction Via Multi-Task Training and Optimized Training Schedule
emnlp-main.355
2311.11813
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.356.bib
https://aclanthology.org/2023.emnlp-main.356/
@inproceedings{chen-etal-2023-bla, title = "The {BLA} Benchmark: Investigating Basic Language Abilities of Pre-Trained Multimodal Models", author = "Chen, Xinyi and Fern{\'a}ndez, Raquel and Pezzelle, Sandro", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.356", doi = "10.18653/v1/2023.emnlp-main.356", pages = "5817--5830", abstract = "Despite the impressive performance achieved by pre-trained language-and-vision models in downstream tasks, it remains an open question whether this reflects a proper understanding of image-text interaction. In this work, we explore to what extent they handle basic linguistic constructions{---}active-passive voice, coordination, and relative clauses{---}that even preschool children can typically master. We present BLA, a novel, automatically constructed benchmark to evaluate multimodal models on these Basic Language Abilities. We show that different types of Transformer-based systems, such as CLIP, ViLBERT, and BLIP2, generally struggle with BLA in a zero-shot setting, in line with previous findings. Our experiments, in particular, show that most of the tested models only marginally benefit when fine-tuned or prompted with construction-specific samples. Yet, the generative BLIP2 shows promising trends, especially in an in-context learning setting. This opens the door to using BLA not only as an evaluation benchmark but also to improve models{'} basic language abilities.", }
Despite the impressive performance achieved by pre-trained language-and-vision models in downstream tasks, it remains an open question whether this reflects a proper understanding of image-text interaction. In this work, we explore to what extent they handle basic linguistic constructions{---}active-passive voice, coordination, and relative clauses{---}that even preschool children can typically master. We present BLA, a novel, automatically constructed benchmark to evaluate multimodal models on these Basic Language Abilities. We show that different types of Transformer-based systems, such as CLIP, ViLBERT, and BLIP2, generally struggle with BLA in a zero-shot setting, in line with previous findings. Our experiments, in particular, show that most of the tested models only marginally benefit when fine-tuned or prompted with construction-specific samples. Yet, the generative BLIP2 shows promising trends, especially in an in-context learning setting. This opens the door to using BLA not only as an evaluation benchmark but also to improve models{'} basic language abilities.
[ "Chen, Xinyi", "Fern{\\'a}ndez, Raquel", "Pezzelle, S", "ro" ]
The BLA Benchmark: Investigating Basic Language Abilities of Pre-Trained Multimodal Models
emnlp-main.356
2310.15061
[ "https://github.com/shin-ee-chen/bla" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.357.bib
https://aclanthology.org/2023.emnlp-main.357/
@inproceedings{darrin-etal-2023-rainproof, title = "{R}ain{P}roof: An Umbrella to Shield Text Generator from Out-Of-Distribution Data", author = "Darrin, Maxime and Piantanida, Pablo and Colombo, Pierre", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.357", doi = "10.18653/v1/2023.emnlp-main.357", pages = "5831--5857", abstract = "Implementing effective control mechanisms to ensure the proper functioning and security of deployed NLP models, from translation to chatbots, is essential. A key ingredient to ensure safe system behaviour is Out-Of-Distribution (OOD) detection, which aims to detect whether an input sample is statistically far from the training distribution. Although OOD detection is a widely covered topic in classification tasks, most methods rely on hidden features output by the encoder. In this work, we focus on leveraging soft-probabilities in a black-box framework, i.e. we can access the soft-predictions but not the internal states of the model. Our contributions include: (i) RAINPROOF a Relative informAItioN Projection OOD detection framework; and (ii) a more operational evaluation setting for OOD detection. Surprisingly, we find that OOD detection is not necessarily aligned with task-specific measures. The OOD detector may filter out samples well processed by the model and keep samples that are not, leading to weaker performance. Our results show that RAINPROOF provides OOD detection methods more aligned with task-specific performance metrics than traditional OOD detectors.", }
Implementing effective control mechanisms to ensure the proper functioning and security of deployed NLP models, from translation to chatbots, is essential. A key ingredient to ensure safe system behaviour is Out-Of-Distribution (OOD) detection, which aims to detect whether an input sample is statistically far from the training distribution. Although OOD detection is a widely covered topic in classification tasks, most methods rely on hidden features output by the encoder. In this work, we focus on leveraging soft-probabilities in a black-box framework, i.e. we can access the soft-predictions but not the internal states of the model. Our contributions include: (i) RAINPROOF a Relative informAItioN Projection OOD detection framework; and (ii) a more operational evaluation setting for OOD detection. Surprisingly, we find that OOD detection is not necessarily aligned with task-specific measures. The OOD detector may filter out samples well processed by the model and keep samples that are not, leading to weaker performance. Our results show that RAINPROOF provides OOD detection methods more aligned with task-specific performance metrics than traditional OOD detectors.
[ "Darrin, Maxime", "Piantanida, Pablo", "Colombo, Pierre" ]
RainProof: An Umbrella to Shield Text Generator from Out-Of-Distribution Data
emnlp-main.357
2212.09171
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.358.bib
https://aclanthology.org/2023.emnlp-main.358/
@inproceedings{ma-etal-2023-kepl, title = "{KEPL}: Knowledge Enhanced Prompt Learning for {C}hinese Hypernym-Hyponym Extraction", author = "Ma, Ningchen and Wang, Dong and Bao, Hongyun and He, Lei and Zheng, Suncong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.358", doi = "10.18653/v1/2023.emnlp-main.358", pages = "5858--5867", abstract = "Modeling hypernym-hyponym ({``}is-a{''}) relations is very important for many natural language processing (NLP) tasks, such as classification, natural language inference and relation extraction. Existing work on is-a relation extraction is mostly in the English language environment. Due to the flexibility of language expression and the lack of high-quality Chinese annotation datasets, it is still a challenge to accurately identify such relations from Chinese unstructured texts. To tackle this problem, we propose a Knowledge Enhanced Prompt Learning (KEPL) method for Chinese hypernym-hyponym relation extraction. Our model uses the Hearst-like patterns as the prior knowledge. By exploiting a Dynamic Adaptor Architecture to select the matching pattern for the text into prompt, our model embeds patterns and text simultaneously. Additionally, we construct a Chinese hypernym-hyponym relation extraction dataset, which contains three typical scenarios, as baike, news and We-media. The experimental results on the dataset demonstrate the efficiency and effectiveness of our proposed model.", }
Modeling hypernym-hyponym ({``}is-a{''}) relations is very important for many natural language processing (NLP) tasks, such as classification, natural language inference and relation extraction. Existing work on is-a relation extraction is mostly in the English language environment. Due to the flexibility of language expression and the lack of high-quality Chinese annotation datasets, it is still a challenge to accurately identify such relations from Chinese unstructured texts. To tackle this problem, we propose a Knowledge Enhanced Prompt Learning (KEPL) method for Chinese hypernym-hyponym relation extraction. Our model uses the Hearst-like patterns as the prior knowledge. By exploiting a Dynamic Adaptor Architecture to select the matching pattern for the text into prompt, our model embeds patterns and text simultaneously. Additionally, we construct a Chinese hypernym-hyponym relation extraction dataset, which contains three typical scenarios, as baike, news and We-media. The experimental results on the dataset demonstrate the efficiency and effectiveness of our proposed model.
[ "Ma, Ningchen", "Wang, Dong", "Bao, Hongyun", "He, Lei", "Zheng, Suncong" ]
KEPL: Knowledge Enhanced Prompt Learning for Chinese Hypernym-Hyponym Extraction
emnlp-main.358
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.359.bib
https://aclanthology.org/2023.emnlp-main.359/
@inproceedings{chen-etal-2023-ditto, title = "Ditto: A Simple and Efficient Approach to Improve Sentence Embeddings", author = "Chen, Qian and Wang, Wen and Zhang, Qinglin and Zheng, Siqi and Deng, Chong and Yu, Hai and Liu, Jiaqing and Ma, Yukun and Zhang, Chong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.359", doi = "10.18653/v1/2023.emnlp-main.359", pages = "5868--5875", abstract = "Prior studies diagnose the anisotropy problem in sentence representations from pre-trained language models, e.g., BERT, without fine-tuning. Our analysis reveals that the sentence embeddings from BERT suffer from a bias towards uninformative words, limiting the performance in semantic textual similarity (STS) tasks. To address this bias, we propose a simple and efficient unsupervised approach, Diagonal Attention Pooling (Ditto), which weights words with model-based importance estimations and computes the weighted average of word representations from pre-trained models as sentence embeddings. Ditto can be easily applied to any pre-trained language model as a postprocessing operation. Compared to prior sentence embedding approaches, Ditto does not add parameters nor requires any learning. Empirical evaluations demonstrate that our proposed Ditto can alleviate the anisotropy problem and improve various pre-trained models on the STS benchmarks.", }
Prior studies diagnose the anisotropy problem in sentence representations from pre-trained language models, e.g., BERT, without fine-tuning. Our analysis reveals that the sentence embeddings from BERT suffer from a bias towards uninformative words, limiting the performance in semantic textual similarity (STS) tasks. To address this bias, we propose a simple and efficient unsupervised approach, Diagonal Attention Pooling (Ditto), which weights words with model-based importance estimations and computes the weighted average of word representations from pre-trained models as sentence embeddings. Ditto can be easily applied to any pre-trained language model as a postprocessing operation. Compared to prior sentence embedding approaches, Ditto does not add parameters nor requires any learning. Empirical evaluations demonstrate that our proposed Ditto can alleviate the anisotropy problem and improve various pre-trained models on the STS benchmarks.
[ "Chen, Qian", "Wang, Wen", "Zhang, Qinglin", "Zheng, Siqi", "Deng, Chong", "Yu, Hai", "Liu, Jiaqing", "Ma, Yukun", "Zhang, Chong" ]
Ditto: A Simple and Efficient Approach to Improve Sentence Embeddings
emnlp-main.359
2305.10786
[ "https://github.com/alibaba-damo-academy/spokennlp" ]
https://huggingface.co/papers/2305.10786
0
0
0
9
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.360.bib
https://aclanthology.org/2023.emnlp-main.360/
@inproceedings{qi-etal-2023-preserving, title = "Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction", author = "Qi, Ji and Zhang, Chuchun and Wang, Xiaozhi and Zeng, Kaisheng and Yu, Jifan and Liu, Jinxin and Hou, Lei and Li, Juanzi and Bin, Xu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.360", doi = "10.18653/v1/2023.emnlp-main.360", pages = "5876--5890", abstract = "The robustness to distribution changes ensures that NLP models can be successfully applied in the realistic world, especially for information extraction tasks. However, most prior evaluation benchmarks have been devoted to validating pairwise matching correctness, ignoring the crucial validation of robustness. In this paper, we present the first benchmark that simulates the evaluation of open information extraction models in the real world, where the syntactic and expressive distributions under the same knowledge meaning may drift variously. We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique that consists of sentences with structured knowledge of the same meaning but with different syntactic and expressive forms. By further elaborating the robustness metric, a model is judged to be robust if its performance is consistently accurate on the overall cliques. We perform experiments on typical models published in the last decade as well as a representative large language model, and the results show that the existing successful models exhibit a frustrating degradation, with a maximum drop of 23.43 $F_1$ score. Our resources and code will be publicly available.", }
The robustness to distribution changes ensures that NLP models can be successfully applied in the realistic world, especially for information extraction tasks. However, most prior evaluation benchmarks have been devoted to validating pairwise matching correctness, ignoring the crucial validation of robustness. In this paper, we present the first benchmark that simulates the evaluation of open information extraction models in the real world, where the syntactic and expressive distributions under the same knowledge meaning may drift variously. We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique that consists of sentences with structured knowledge of the same meaning but with different syntactic and expressive forms. By further elaborating the robustness metric, a model is judged to be robust if its performance is consistently accurate on the overall cliques. We perform experiments on typical models published in the last decade as well as a representative large language model, and the results show that the existing successful models exhibit a frustrating degradation, with a maximum drop of 23.43 $F_1$ score. Our resources and code will be publicly available.
[ "Qi, Ji", "Zhang, Chuchun", "Wang, Xiaozhi", "Zeng, Kaisheng", "Yu, Jifan", "Liu, Jinxin", "Hou, Lei", "Li, Juanzi", "Bin, Xu" ]
Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction
emnlp-main.360
2305.13981
[ "https://github.com/qijimrc/robust" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.361.bib
https://aclanthology.org/2023.emnlp-main.361/
@inproceedings{kaffee-etal-2023-article, title = "Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual {W}ikipedia Editor Discussions", author = "Kaffee, Lucie-Aim{\'e}e and Arora, Arnav and Augenstein, Isabelle", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.361", doi = "10.18653/v1/2023.emnlp-main.361", pages = "5891--5909", abstract = "The moderation of content on online platforms is usually non-transparent. On Wikipedia, however, this discussion is carried out publicly and editors are encouraged to use the content moderation policies as explanations for making moderation decisions. Currently, only a few comments explicitly mention those policies {--} 20{\%} of the English ones, but as few as 2{\%} of the German and Turkish comments. To aid in this process of understanding how content is moderated, we construct a novel multilingual dataset of Wikipedia editor discussions along with their reasoning in three languages. The dataset contains the stances of the editors (keep, delete, merge, comment), along with the stated reason, and a content moderation policy, for each edit decision. We demonstrate that stance and corresponding reason (policy) can be predicted jointly with a high degree of accuracy, adding transparency to the decision-making process. We release both our joint prediction models and the multilingual content moderation dataset for further research on automated transparent content moderation.", }
The moderation of content on online platforms is usually non-transparent. On Wikipedia, however, this discussion is carried out publicly and editors are encouraged to use the content moderation policies as explanations for making moderation decisions. Currently, only a few comments explicitly mention those policies {--} 20{\%} of the English ones, but as few as 2{\%} of the German and Turkish comments. To aid in this process of understanding how content is moderated, we construct a novel multilingual dataset of Wikipedia editor discussions along with their reasoning in three languages. The dataset contains the stances of the editors (keep, delete, merge, comment), along with the stated reason, and a content moderation policy, for each edit decision. We demonstrate that stance and corresponding reason (policy) can be predicted jointly with a high degree of accuracy, adding transparency to the decision-making process. We release both our joint prediction models and the multilingual content moderation dataset for further research on automated transparent content moderation.
[ "Kaffee, Lucie-Aim{\\'e}e", "Arora, Arnav", "Augenstein, Isabelle" ]
Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions
emnlp-main.361
2310.05779
[ "https://github.com/copenlu/wiki-stance" ]
https://huggingface.co/papers/2310.05779
0
0
0
3
[]
[ "copenlu/wiki-stance" ]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.362.bib
https://aclanthology.org/2023.emnlp-main.362/
@inproceedings{bae-etal-2023-fast, title = "Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding", author = "Bae, Sangmin and Ko, Jongwoo and Song, Hwanjun and Yun, Se-Young", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.362", doi = "10.18653/v1/2023.emnlp-main.362", pages = "5910--5924", abstract = "To tackle the high inference latency exhibited by autoregressive language models, previous studies have proposed an early-exiting framework that allocates adaptive computation paths for each token based on the complexity of generating the subsequent token. However, we observed several shortcomings, including performance degradation caused by a state copying mechanism or numerous exit paths, and sensitivity to exit confidence thresholds. Consequently, we propose a Fast and Robust Early-Exiting (FREE) framework, which incorporates a shallow-deep module and a synchronized parallel decoding. Our framework enables faster inference by synchronizing the decoding process of the current token with previously stacked early-exited tokens. Furthermore, as parallel decoding allows us to observe predictions from both shallow and deep models, we present a novel adaptive threshold estimator that exploits a Beta mixture model to determine suitable confidence thresholds. We empirically demonstrated the superiority of our proposed framework on extensive generation tasks.", }
To tackle the high inference latency exhibited by autoregressive language models, previous studies have proposed an early-exiting framework that allocates adaptive computation paths for each token based on the complexity of generating the subsequent token. However, we observed several shortcomings, including performance degradation caused by a state copying mechanism or numerous exit paths, and sensitivity to exit confidence thresholds. Consequently, we propose a Fast and Robust Early-Exiting (FREE) framework, which incorporates a shallow-deep module and a synchronized parallel decoding. Our framework enables faster inference by synchronizing the decoding process of the current token with previously stacked early-exited tokens. Furthermore, as parallel decoding allows us to observe predictions from both shallow and deep models, we present a novel adaptive threshold estimator that exploits a Beta mixture model to determine suitable confidence thresholds. We empirically demonstrated the superiority of our proposed framework on extensive generation tasks.
[ "Bae, Sangmin", "Ko, Jongwoo", "Song, Hwanjun", "Yun, Se-Young" ]
Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding
emnlp-main.362
2310.05424
[ "https://github.com/raymin0223/fast_robust_early_exit" ]
https://huggingface.co/papers/2310.05424
0
1
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.363.bib
https://aclanthology.org/2023.emnlp-main.363/
@inproceedings{qin-etal-2023-end, title = "End-to-end Task-oriented Dialogue: A Survey of Tasks, Methods, and Future Directions", author = "Qin, Libo and Pan, Wenbo and Chen, Qiguang and Liao, Lizi and Yu, Zhou and Zhang, Yue and Che, Wanxiang and Li, Min", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.363", doi = "10.18653/v1/2023.emnlp-main.363", pages = "5925--5941", abstract = "End-to-end task-oriented dialogue (EToD) can directly generate responses in an end-to-end fashion without modular training, which attracts escalating popularity. The advancement of deep neural networks, especially the successful use of large pre-trained models, has further led to significant progress in EToD research in recent years. In this paper, we present a thorough review and provide a unified perspective to summarize existing approaches as well as recent trends to advance the development of EToD research. The contributions of this paper can be summarized: (1) First survey: to our knowledge, we take the first step to present a thorough survey of this research field; (2) New taxonomy: we first introduce a unified perspective for EToD, including (i) Modularly EToD and (ii) Fully EToD; (3) New Frontiers: we discuss some potential frontier areas as well as the corresponding challenges, hoping to spur breakthrough research in EToD field; (4) Abundant resources: we build a public website, where EToD researchers could directly access the recent progress. We hope this work can serve as a thorough reference for the EToD research community.", }
End-to-end task-oriented dialogue (EToD) can directly generate responses in an end-to-end fashion without modular training, which attracts escalating popularity. The advancement of deep neural networks, especially the successful use of large pre-trained models, has further led to significant progress in EToD research in recent years. In this paper, we present a thorough review and provide a unified perspective to summarize existing approaches as well as recent trends to advance the development of EToD research. The contributions of this paper can be summarized: (1) First survey: to our knowledge, we take the first step to present a thorough survey of this research field; (2) New taxonomy: we first introduce a unified perspective for EToD, including (i) Modularly EToD and (ii) Fully EToD; (3) New Frontiers: we discuss some potential frontier areas as well as the corresponding challenges, hoping to spur breakthrough research in EToD field; (4) Abundant resources: we build a public website, where EToD researchers could directly access the recent progress. We hope this work can serve as a thorough reference for the EToD research community.
[ "Qin, Libo", "Pan, Wenbo", "Chen, Qiguang", "Liao, Lizi", "Yu, Zhou", "Zhang, Yue", "Che, Wanxiang", "Li, Min" ]
End-to-end Task-oriented Dialogue: A Survey of Tasks, Methods, and Future Directions
emnlp-main.363
2311.09008
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.364.bib
https://aclanthology.org/2023.emnlp-main.364/
@inproceedings{yoran-etal-2023-answering, title = "Answering Questions by Meta-Reasoning over Multiple Chains of Thought", author = "Yoran, Ori and Wolfson, Tomer and Bogin, Ben and Katz, Uri and Deutch, Daniel and Berant, Jonathan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.364", doi = "10.18653/v1/2023.emnlp-main.364", pages = "5942--5966", abstract = "Modern systems for multi-hop question answering (QA) typically break questions into a sequence of reasoning steps, termed chain-of-thought (CoT), before arriving at a final answer. Often, multiple chains are sampled and aggregated through a voting mechanism over the final answers, but the intermediate steps themselves are discarded. While such approaches improve performance, they do not consider the relations between intermediate steps across chains and do not provide a unified explanation for the predicted answer. We introduce Multi-Chain Reasoning (MCR), an approach which prompts large language models to meta-reason over multiple chains of thought, rather than aggregate their answers. MCR examines different reasoning chains, mixes information between them and selects the most relevant facts in generating an explanation and predicting the answer. MCR outperforms strong baselines on 7 multi-hop QA datasets. Moreover, our analysis reveals that MCR explanations exhibit high quality, enabling humans to verify its answers.", }
Modern systems for multi-hop question answering (QA) typically break questions into a sequence of reasoning steps, termed chain-of-thought (CoT), before arriving at a final answer. Often, multiple chains are sampled and aggregated through a voting mechanism over the final answers, but the intermediate steps themselves are discarded. While such approaches improve performance, they do not consider the relations between intermediate steps across chains and do not provide a unified explanation for the predicted answer. We introduce Multi-Chain Reasoning (MCR), an approach which prompts large language models to meta-reason over multiple chains of thought, rather than aggregate their answers. MCR examines different reasoning chains, mixes information between them and selects the most relevant facts in generating an explanation and predicting the answer. MCR outperforms strong baselines on 7 multi-hop QA datasets. Moreover, our analysis reveals that MCR explanations exhibit high quality, enabling humans to verify its answers.
[ "Yoran, Ori", "Wolfson, Tomer", "Bogin, Ben", "Katz, Uri", "Deutch, Daniel", "Berant, Jonathan" ]
Answering Questions by Meta-Reasoning over Multiple Chains of Thought
emnlp-main.364
2304.13007
[ "https://github.com/oriyor/reasoning-on-cots" ]
https://huggingface.co/papers/2304.13007
2
1
0
6
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.365.bib
https://aclanthology.org/2023.emnlp-main.365/
@inproceedings{xu-etal-2023-instructscore, title = "{INSTRUCTSCORE}: Towards Explainable Text Generation Evaluation with Automatic Feedback", author = "Xu, Wenda and Wang, Danqing and Pan, Liangming and Song, Zhenqiao and Freitag, Markus and Wang, William and Li, Lei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.365", doi = "10.18653/v1/2023.emnlp-main.365", pages = "5967--5994", abstract = "Automatically evaluating the quality of language generation is critical. Although recent learned metrics show high correlation with human judgement, these metrics do not provide explicit explanation of their verdict, nor associate the scores with defects in the generated text. To address this limitation, we present INSTRUCTSCORE, a fine-grained explainable evaluation metric for text generation. By harnessing both explicit human instruction and the implicit knowledge of GPT-4, we fine-tune a text evaluation metric based on LLaMA, producing both a score for generated text and a human readable diagnostic report. We evaluate INSTRUCTSCORE on a variety of generation tasks, including translation, captioning, data-to-text, and commonsense generation. Experiments show that our 7B model surpasses all other unsupervised metrics, including those based on 175B GPT-3 and GPT-4. Surprisingly, our INSTRUCTSCORE, even without direct supervision from human-rated data, achieves performance levels on par with state-of-the-art metrics like COMET22, which were fine-tuned on human ratings.", }
Automatically evaluating the quality of language generation is critical. Although recent learned metrics show high correlation with human judgement, these metrics do not provide explicit explanation of their verdict, nor associate the scores with defects in the generated text. To address this limitation, we present INSTRUCTSCORE, a fine-grained explainable evaluation metric for text generation. By harnessing both explicit human instruction and the implicit knowledge of GPT-4, we fine-tune a text evaluation metric based on LLaMA, producing both a score for generated text and a human readable diagnostic report. We evaluate INSTRUCTSCORE on a variety of generation tasks, including translation, captioning, data-to-text, and commonsense generation. Experiments show that our 7B model surpasses all other unsupervised metrics, including those based on 175B GPT-3 and GPT-4. Surprisingly, our INSTRUCTSCORE, even without direct supervision from human-rated data, achieves performance levels on par with state-of-the-art metrics like COMET22, which were fine-tuned on human ratings.
[ "Xu, Wenda", "Wang, Danqing", "Pan, Liangming", "Song, Zhenqiao", "Freitag, Markus", "Wang, William", "Li, Lei" ]
INSTRUCTSCORE: Towards Explainable Text Generation Evaluation with Automatic Feedback
emnlp-main.365
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.366.bib
https://aclanthology.org/2023.emnlp-main.366/
@inproceedings{li-etal-2023-multi-level, title = "Multi-level Contrastive Learning for Script-based Character Understanding", author = "Li, Dawei and Zhang, Hengyuan and Li, Yanran and Yang, Shiping", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.366", doi = "10.18653/v1/2023.emnlp-main.366", pages = "5995--6013", abstract = "In this work, we tackle the scenario of understanding characters in scripts, which aims to learn the characters{'} personalities and identities from their utterances. We begin by analyzing several challenges in this scenario, and then propose a multi-level contrastive learning framework to capture characters{'} global information in a fine-grained manner. To validate the proposed framework, we conduct extensive experiments on three character understanding sub-tasks by comparing with strong pre-trained language models, including SpanBERT, Longformer, BigBird and ChatGPT-3.5. Experimental results demonstrate that our method improves the performances by a considerable margin. Through further in-depth analysis, we show the effectiveness of our method in addressing the challenges and provide more hints on the scenario of character understanding. We will open-source our work in this URL.", }
In this work, we tackle the scenario of understanding characters in scripts, which aims to learn the characters{'} personalities and identities from their utterances. We begin by analyzing several challenges in this scenario, and then propose a multi-level contrastive learning framework to capture characters{'} global information in a fine-grained manner. To validate the proposed framework, we conduct extensive experiments on three character understanding sub-tasks by comparing with strong pre-trained language models, including SpanBERT, Longformer, BigBird and ChatGPT-3.5. Experimental results demonstrate that our method improves the performances by a considerable margin. Through further in-depth analysis, we show the effectiveness of our method in addressing the challenges and provide more hints on the scenario of character understanding. We will open-source our work in this URL.
[ "Li, Dawei", "Zhang, Hengyuan", "Li, Yanran", "Yang, Shiping" ]
Multi-level Contrastive Learning for Script-based Character Understanding
emnlp-main.366
2310.13231
[ "https://github.com/david-li0406/script-based-character-understanding" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.367.bib
https://aclanthology.org/2023.emnlp-main.367/
@inproceedings{seo-etal-2023-chef, title = "{CHEF} in the Language Kitchen: A Generative Data Augmentation Leveraging {K}orean Morpheme Ingredients", author = "Seo, Jaehyung and Moon, Hyeonseok and Lee, Jaewook and Eo, Sugyeong and Park, Chanjun and Lim, Heuiseok", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.367", doi = "10.18653/v1/2023.emnlp-main.367", pages = "6014--6029", abstract = "Korean morphological variations present unique opportunities and challenges in natural language processing (NLP), necessitating an advanced understanding of morpheme-based sentence construction. The complexity of morphological variations allows for diverse sentence forms based on the syntactic-semantic integration of functional morphemes (i.e., affixes) to lexical morphemes (i.e., roots). With this in mind, we propose a method - CHEF, replicating the morphological transformations inherent in sentences based on lexical and functional morpheme combinations through generative data augmentation. CHEF operates using a morpheme blender and a label discriminator, thereby enhancing the diversity of Korean sentence forms by capturing the properties of agglutination while maintaining label consistency. We conduct experiments on Korean multiple classification datasets, improving model performance in full- and few-shot settings. Our proposed method boosts performance beyond the preceding data augmentation methods without incurring external data usage. We demonstrate that our approach achieves comparable results yielded by augmentation techniques that use large language models (LLMs).", }
Korean morphological variations present unique opportunities and challenges in natural language processing (NLP), necessitating an advanced understanding of morpheme-based sentence construction. The complexity of morphological variations allows for diverse sentence forms based on the syntactic-semantic integration of functional morphemes (i.e., affixes) to lexical morphemes (i.e., roots). With this in mind, we propose a method - CHEF, replicating the morphological transformations inherent in sentences based on lexical and functional morpheme combinations through generative data augmentation. CHEF operates using a morpheme blender and a label discriminator, thereby enhancing the diversity of Korean sentence forms by capturing the properties of agglutination while maintaining label consistency. We conduct experiments on Korean multiple classification datasets, improving model performance in full- and few-shot settings. Our proposed method boosts performance beyond the preceding data augmentation methods without incurring external data usage. We demonstrate that our approach achieves comparable results yielded by augmentation techniques that use large language models (LLMs).
[ "Seo, Jaehyung", "Moon, Hyeonseok", "Lee, Jaewook", "Eo, Sugyeong", "Park, Chanjun", "Lim, Heuiseok" ]
CHEF in the Language Kitchen: A Generative Data Augmentation Leveraging Korean Morpheme Ingredients
emnlp-main.367
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.368.bib
https://aclanthology.org/2023.emnlp-main.368/
@inproceedings{ruiz-dolz-etal-2023-automatic, title = "Automatic Debate Evaluation with Argumentation Semantics and Natural Language Argument Graph Networks", author = "Ruiz-Dolz, Ramon and Heras, Stella and Garcia, Ana", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.368", doi = "10.18653/v1/2023.emnlp-main.368", pages = "6030--6040", abstract = "The lack of annotated data on professional argumentation and complete argumentative debates has led to the oversimplification and the inability of approaching more complex natural language processing tasks. Such is the case of the automatic evaluation of complete professional argumentative debates. In this paper, we propose an original hybrid method to automatically predict the winning stance in this kind of debates. For that purpose, we combine concepts from argumentation theory such as argumentation frameworks and semantics, with Transformer-based architectures and neural graph networks. Furthermore, we obtain promising results that lay the basis on an unexplored new instance of the automatic analysis of natural language arguments.", }
The lack of annotated data on professional argumentation and complete argumentative debates has led to the oversimplification and the inability of approaching more complex natural language processing tasks. Such is the case of the automatic evaluation of complete professional argumentative debates. In this paper, we propose an original hybrid method to automatically predict the winning stance in this kind of debates. For that purpose, we combine concepts from argumentation theory such as argumentation frameworks and semantics, with Transformer-based architectures and neural graph networks. Furthermore, we obtain promising results that lay the basis on an unexplored new instance of the automatic analysis of natural language arguments.
[ "Ruiz-Dolz, Ramon", "Heras, Stella", "Garcia, Ana" ]
Automatic Debate Evaluation with Argumentation Semantics and Natural Language Argument Graph Networks
emnlp-main.368
2203.14647
[ "https://github.com/raruidol/argumentevaluation" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.369.bib
https://aclanthology.org/2023.emnlp-main.369/
@inproceedings{razumovskaia-etal-2023-transfer, title = "Transfer-Free Data-Efficient Multilingual Slot Labeling", author = "Razumovskaia, Evgeniia and Vuli{\'c}, Ivan and Korhonen, Anna", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.369", doi = "10.18653/v1/2023.emnlp-main.369", pages = "6041--6055", abstract = "Slot labeling (SL) is a core component of task-oriented dialogue (TOD) systems, where slots and corresponding values are usually language-, task- and domain-specific. Therefore, extending the system to any new language-domain-task configuration requires (re)running an expensive and resource-intensive data annotation process. To mitigate the inherent data scarcity issue, current research on multilingual ToD assumes that sufficient English-language annotated data are always available for particular tasks and domains, and thus operates in a standard cross-lingual transfer setup. In this work, we depart from this often unrealistic assumption. We examine challenging scenarios where such transfer-enabling English annotated data cannot be guaranteed, and focus on bootstrapping multilingual data-efficient slot labelers in transfer-free scenarios directly in the target languages without any English-ready data. We propose a two-stage slot labeling approach (termed TWOSL) which transforms standard multilingual sentence encoders into effective slot labelers. In Stage 1, relying on SL-adapted contrastive learning with only a handful of SL-annotated examples, we turn sentence encoders into task-specific span encoders. In Stage 2, we recast SL from a token classification into a simpler, less data-intensive span classification task. Our results on two standard multilingual TOD datasets and across diverse languages confirm the effectiveness and robustness of TWOSL. It is especially effective for the most challenging transfer-free few-shot setups, paving the way for quick and data-efficient bootstrapping of multilingual slot labelers for TOD.", }
Slot labeling (SL) is a core component of task-oriented dialogue (TOD) systems, where slots and corresponding values are usually language-, task- and domain-specific. Therefore, extending the system to any new language-domain-task configuration requires (re)running an expensive and resource-intensive data annotation process. To mitigate the inherent data scarcity issue, current research on multilingual ToD assumes that sufficient English-language annotated data are always available for particular tasks and domains, and thus operates in a standard cross-lingual transfer setup. In this work, we depart from this often unrealistic assumption. We examine challenging scenarios where such transfer-enabling English annotated data cannot be guaranteed, and focus on bootstrapping multilingual data-efficient slot labelers in transfer-free scenarios directly in the target languages without any English-ready data. We propose a two-stage slot labeling approach (termed TWOSL) which transforms standard multilingual sentence encoders into effective slot labelers. In Stage 1, relying on SL-adapted contrastive learning with only a handful of SL-annotated examples, we turn sentence encoders into task-specific span encoders. In Stage 2, we recast SL from a token classification into a simpler, less data-intensive span classification task. Our results on two standard multilingual TOD datasets and across diverse languages confirm the effectiveness and robustness of TWOSL. It is especially effective for the most challenging transfer-free few-shot setups, paving the way for quick and data-efficient bootstrapping of multilingual slot labelers for TOD.
[ "Razumovskaia, Evgeniia", "Vuli{\\'c}, Ivan", "Korhonen, Anna" ]
Transfer-Free Data-Efficient Multilingual Slot Labeling
emnlp-main.369
2305.13528
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.370.bib
https://aclanthology.org/2023.emnlp-main.370/
@inproceedings{yang-etal-2023-towards, title = "Towards Interpretable Mental Health Analysis with Large Language Models", author = "Yang, Kailai and Ji, Shaoxiong and Zhang, Tianlin and Xie, Qianqian and Kuang, Ziyan and Ananiadou, Sophia", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.370", doi = "10.18653/v1/2023.emnlp-main.370", pages = "6056--6077", abstract = "The latest large language models (LLMs) such as ChatGPT, exhibit strong capabilities in automated mental health analysis. However, existing relevant studies bear several limitations, including inadequate evaluations, lack of prompting strategies, and ignorance of exploring LLMs for explainability. To bridge these gaps, we comprehensively evaluate the mental health analysis and emotional reasoning ability of LLMs on 11 datasets across 5 tasks. We explore the effects of different prompting strategies with unsupervised and distantly supervised emotional information. Based on these prompts, we explore LLMs for interpretable mental health analysis by instructing them to generate explanations for each of their decisions. We convey strict human evaluations to assess the quality of the generated explanations, leading to a novel dataset with 163 human-assessed explanations. We benchmark existing automatic evaluation metrics on this dataset to guide future related works. According to the results, ChatGPT shows strong in-context learning ability but still has a significant gap with advanced task-specific methods. Careful prompt engineering with emotional cues and expert-written few-shot examples can also effectively improve performance on mental health analysis. In addition, ChatGPT generates explanations that approach human performance, showing its great potential in explainable mental health analysis.", }
The latest large language models (LLMs) such as ChatGPT, exhibit strong capabilities in automated mental health analysis. However, existing relevant studies bear several limitations, including inadequate evaluations, lack of prompting strategies, and ignorance of exploring LLMs for explainability. To bridge these gaps, we comprehensively evaluate the mental health analysis and emotional reasoning ability of LLMs on 11 datasets across 5 tasks. We explore the effects of different prompting strategies with unsupervised and distantly supervised emotional information. Based on these prompts, we explore LLMs for interpretable mental health analysis by instructing them to generate explanations for each of their decisions. We convey strict human evaluations to assess the quality of the generated explanations, leading to a novel dataset with 163 human-assessed explanations. We benchmark existing automatic evaluation metrics on this dataset to guide future related works. According to the results, ChatGPT shows strong in-context learning ability but still has a significant gap with advanced task-specific methods. Careful prompt engineering with emotional cues and expert-written few-shot examples can also effectively improve performance on mental health analysis. In addition, ChatGPT generates explanations that approach human performance, showing its great potential in explainable mental health analysis.
[ "Yang, Kailai", "Ji, Shaoxiong", "Zhang, Tianlin", "Xie, Qianqian", "Kuang, Ziyan", "Ananiadou, Sophia" ]
Towards Interpretable Mental Health Analysis with Large Language Models
emnlp-main.370
2304.03347
[ "https://github.com/stevekgyang/mentalllama" ]
https://huggingface.co/papers/2304.03347
0
0
0
6
[]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.371.bib
https://aclanthology.org/2023.emnlp-main.371/
@inproceedings{lee-etal-2023-learning, title = "Learning to Rank Generation with Pairwise Partial Rewards", author = "Lee, Youngwon and Lee, Jinu and Hwang, Seung-won", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.371", doi = "10.18653/v1/2023.emnlp-main.371", pages = "6078--6092", abstract = "This paper studies the use of reinforcement learning for conditional text generation, which overcomes the limitation of the prevalent supervised maximum likelihood estimation approach. However, it still suffers from challenges including the large action space and the delayed reward, as the reward can be computed only after an entire sequence is generated. To address these challenges, we propose a method that provides partial rewards for intermediate actions taken on partial sequences. This enables the model to promptly prioritize actions that lead to the generation of more desirable sequences. Our method{'}s key contribution lies in its focus on distinguishing relatively more desirable actions rather than striving to precisely estimate pointwise values for arbitrary partial sequences. Instead, our model learns to discern the relative desirability between pairs of actions, or rank actions in a pairwise manner, only when necessary and feasible. This is materialized in an efficient way by leveraging the prefix tree constructed from the sampled sequences. Experimental results on paraphrase generation and constrained machine translation tasks showcase the effectiveness of our method.", }
This paper studies the use of reinforcement learning for conditional text generation, which overcomes the limitation of the prevalent supervised maximum likelihood estimation approach. However, it still suffers from challenges including the large action space and the delayed reward, as the reward can be computed only after an entire sequence is generated. To address these challenges, we propose a method that provides partial rewards for intermediate actions taken on partial sequences. This enables the model to promptly prioritize actions that lead to the generation of more desirable sequences. Our method{'}s key contribution lies in its focus on distinguishing relatively more desirable actions rather than striving to precisely estimate pointwise values for arbitrary partial sequences. Instead, our model learns to discern the relative desirability between pairs of actions, or rank actions in a pairwise manner, only when necessary and feasible. This is materialized in an efficient way by leveraging the prefix tree constructed from the sampled sequences. Experimental results on paraphrase generation and constrained machine translation tasks showcase the effectiveness of our method.
[ "Lee, Youngwon", "Lee, Jinu", "Hwang, Seung-won" ]
Learning to Rank Generation with Pairwise Partial Rewards
emnlp-main.371
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.372.bib
https://aclanthology.org/2023.emnlp-main.372/
@inproceedings{gao-etal-2023-greedycas, title = "{G}reedy{CAS}: Unsupervised Scientific Abstract Segmentation with Normalized Mutual Information", author = "Gao, Yingqiang and Lam, Jessica and Gu, Nianlong and Hahnloser, Richard", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.372", doi = "10.18653/v1/2023.emnlp-main.372", pages = "6093--6108", abstract = "The abstracts of scientific papers typically contain both premises (e.g., background and observations) and conclusions. Although conclusion sentences are highlighted in structured abstracts, in non-structured abstracts the concluding information is not explicitly marked, which makes the automatic segmentation of conclusions from scientific abstracts a challenging task. In this work, we explore Normalized Mutual Information (NMI) as a means for abstract segmentation. We consider each abstract as a recurrent cycle of sentences and place two segmentation boundaries by greedily optimizing the NMI score between the two segments, assuming that conclusions are strongly semantically linked with preceding premises. On non-structured abstracts, our proposed unsupervised approach GreedyCAS achieves the best performance across all evaluation metrics; on structured abstracts, GreedyCAS outperforms all baseline methods measured by $P_k$. The strong correlation of NMI to our evaluation metrics reveals the effectiveness of NMI for abstract segmentation.", }
The abstracts of scientific papers typically contain both premises (e.g., background and observations) and conclusions. Although conclusion sentences are highlighted in structured abstracts, in non-structured abstracts the concluding information is not explicitly marked, which makes the automatic segmentation of conclusions from scientific abstracts a challenging task. In this work, we explore Normalized Mutual Information (NMI) as a means for abstract segmentation. We consider each abstract as a recurrent cycle of sentences and place two segmentation boundaries by greedily optimizing the NMI score between the two segments, assuming that conclusions are strongly semantically linked with preceding premises. On non-structured abstracts, our proposed unsupervised approach GreedyCAS achieves the best performance across all evaluation metrics; on structured abstracts, GreedyCAS outperforms all baseline methods measured by $P_k$. The strong correlation of NMI to our evaluation metrics reveals the effectiveness of NMI for abstract segmentation.
[ "Gao, Yingqiang", "Lam, Jessica", "Gu, Nianlong", "Hahnloser, Richard" ]
GreedyCAS: Unsupervised Scientific Abstract Segmentation with Normalized Mutual Information
emnlp-main.372
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.373.bib
https://aclanthology.org/2023.emnlp-main.373/
@inproceedings{tran-etal-2023-spoiler, title = "Spoiler Detection as Semantic Text Matching", author = "Tran, Ryan and Xu, Canwen and McAuley, Julian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.373", doi = "10.18653/v1/2023.emnlp-main.373", pages = "6109--6113", abstract = "Engaging with discussion of TV shows online often requires individuals to refrain from consuming show-related content for extended periods to avoid spoilers. While existing research on spoiler detection shows promising results in safeguarding viewers from general spoilers, it fails to address the issue of users abstaining from show-related content during their watch. This is primarily because the definition of a spoiler varies depending on the viewer{'}s progress in the show, and conventional spoiler detection methods lack the granularity to capture this complexity. To tackle this challenge, we propose the task of spoiler matching, which involves assigning an episode number to a spoiler given a specific TV show. We frame this task as semantic text matching and introduce a dataset comprised of comments and episode summaries to evaluate model performance. Given the length of each example, our dataset can also serve as a benchmark for long-range language models.", }
Engaging with discussion of TV shows online often requires individuals to refrain from consuming show-related content for extended periods to avoid spoilers. While existing research on spoiler detection shows promising results in safeguarding viewers from general spoilers, it fails to address the issue of users abstaining from show-related content during their watch. This is primarily because the definition of a spoiler varies depending on the viewer{'}s progress in the show, and conventional spoiler detection methods lack the granularity to capture this complexity. To tackle this challenge, we propose the task of spoiler matching, which involves assigning an episode number to a spoiler given a specific TV show. We frame this task as semantic text matching and introduce a dataset comprised of comments and episode summaries to evaluate model performance. Given the length of each example, our dataset can also serve as a benchmark for long-range language models.
[ "Tran, Ryan", "Xu, Canwen", "McAuley, Julian" ]
Spoiler Detection as Semantic Text Matching
emnlp-main.373
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.374.bib
https://aclanthology.org/2023.emnlp-main.374/
@inproceedings{padmakumar-etal-2023-multimodal, title = "Multimodal Embodied Plan Prediction Augmented with Synthetic Embodied Dialogue", author = "Padmakumar, Aishwarya and Inan, Mert and Gella, Spandana and Lange, Patrick and Hakkani-Tur, Dilek", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.374", doi = "10.18653/v1/2023.emnlp-main.374", pages = "6114--6131", abstract = "Embodied task completion is a challenge where an agent in a simulated environment must predict environment actions to complete tasks based on natural language instructions and ego-centric visual observations. We propose a variant of this problem where the agent predicts actions at a higher level of abstraction called a plan, which helps make agent actions more interpretable and can be obtained from the appropriate prompting of large language models. We show that multimodal transformer models can outperform language-only models for this problem but fall significantly short of oracle plans. Since collecting human-human dialogues for embodied environments is expensive and time-consuming, we propose a method to synthetically generate such dialogues, which we then use as training data for plan prediction. We demonstrate that multimodal transformer models can attain strong zero-shot performance from our synthetic data, outperforming language-only models trained on human-human data.", }
Embodied task completion is a challenge where an agent in a simulated environment must predict environment actions to complete tasks based on natural language instructions and ego-centric visual observations. We propose a variant of this problem where the agent predicts actions at a higher level of abstraction called a plan, which helps make agent actions more interpretable and can be obtained from the appropriate prompting of large language models. We show that multimodal transformer models can outperform language-only models for this problem but fall significantly short of oracle plans. Since collecting human-human dialogues for embodied environments is expensive and time-consuming, we propose a method to synthetically generate such dialogues, which we then use as training data for plan prediction. We demonstrate that multimodal transformer models can attain strong zero-shot performance from our synthetic data, outperforming language-only models trained on human-human data.
[ "Padmakumar, Aishwarya", "Inan, Mert", "Gella, Sp", "ana", "Lange, Patrick", "Hakkani-Tur, Dilek" ]
Multimodal Embodied Plan Prediction Augmented with Synthetic Embodied Dialogue
emnlp-main.374
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.375.bib
https://aclanthology.org/2023.emnlp-main.375/
@inproceedings{shao-etal-2023-gem, title = "{GEM}: Gestalt Enhanced Markup Language Model for Web Understanding via Render Tree", author = "Shao, Zirui and Gao, Feiyu and Qi, Zhongda and Xing, Hangdi and Bu, Jiajun and Yu, Zhi and Zheng, Qi and Liu, Xiaozhong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.375", doi = "10.18653/v1/2023.emnlp-main.375", pages = "6132--6145", abstract = "Inexhaustible web content carries abundant perceptible information beyond text. Unfortunately, most prior efforts in pre-trained Language Models (LMs) ignore such cyber-richness, while few of them only employ plain HTMLs, and crucial information in the rendered web, such as visual, layout, and style, are excluded. Intuitively, those perceptible web information can provide essential intelligence to facilitate content understanding tasks. This study presents an innovative Gestalt Enhanced Markup (GEM) Language Model inspired by Gestalt psychological theory for hosting heterogeneous visual information from the render tree into the language model without requiring additional visual input. Comprehensive experiments on multiple downstream tasks, i.e., web question answering and web information extraction, validate GEM superiority.", }
Inexhaustible web content carries abundant perceptible information beyond text. Unfortunately, most prior efforts in pre-trained Language Models (LMs) ignore such cyber-richness, while few of them only employ plain HTMLs, and crucial information in the rendered web, such as visual, layout, and style, are excluded. Intuitively, those perceptible web information can provide essential intelligence to facilitate content understanding tasks. This study presents an innovative Gestalt Enhanced Markup (GEM) Language Model inspired by Gestalt psychological theory for hosting heterogeneous visual information from the render tree into the language model without requiring additional visual input. Comprehensive experiments on multiple downstream tasks, i.e., web question answering and web information extraction, validate GEM superiority.
[ "Shao, Zirui", "Gao, Feiyu", "Qi, Zhongda", "Xing, Hangdi", "Bu, Jiajun", "Yu, Zhi", "Zheng, Qi", "Liu, Xiaozhong" ]
GEM: Gestalt Enhanced Markup Language Model for Web Understanding via Render Tree
emnlp-main.375
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.376.bib
https://aclanthology.org/2023.emnlp-main.376/
@inproceedings{pei-etal-2023-abstractive, title = "Abstractive Open Information Extraction", author = "Pei, Kevin and Jindal, Ishan and Chang, Kevin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.376", doi = "10.18653/v1/2023.emnlp-main.376", pages = "6146--6158", abstract = "Open Information Extraction (OpenIE) is a traditional NLP task that extracts structured information from unstructured text to be used for other downstream applications. Traditionally, OpenIE focuses on extracting the surface forms of relations as they appear in the raw text, which we term extractive OpenIE. One of the main drawbacks of this approach is that implicit semantic relations (inferred relations) can not be extracted, compromising the performance of downstream applications. In this paper, we broaden the scope of OpenIE relations from merely the surface form of relations to include inferred relations, which we term abstractive OpenIE. This new task calls for the development of a new abstractive OpenIE training dataset and a baseline neural model that can extract those inferred relations. We also demonstrate the necessity for a new semantics-based metric for evaluating abstractive OpenIE extractions. Via a case study on Complex QA, we demonstrate the effectiveness of abstractive OpenIE.", }
Open Information Extraction (OpenIE) is a traditional NLP task that extracts structured information from unstructured text to be used for other downstream applications. Traditionally, OpenIE focuses on extracting the surface forms of relations as they appear in the raw text, which we term extractive OpenIE. One of the main drawbacks of this approach is that implicit semantic relations (inferred relations) can not be extracted, compromising the performance of downstream applications. In this paper, we broaden the scope of OpenIE relations from merely the surface form of relations to include inferred relations, which we term abstractive OpenIE. This new task calls for the development of a new abstractive OpenIE training dataset and a baseline neural model that can extract those inferred relations. We also demonstrate the necessity for a new semantics-based metric for evaluating abstractive OpenIE extractions. Via a case study on Complex QA, we demonstrate the effectiveness of abstractive OpenIE.
[ "Pei, Kevin", "Jindal, Ishan", "Chang, Kevin" ]
Abstractive Open Information Extraction
emnlp-main.376
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.377.bib
https://aclanthology.org/2023.emnlp-main.377/
@inproceedings{ghosh-etal-2023-cosyn, title = "{C}o{S}yn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network", author = "Ghosh, Sreyan and Suri, Manan and Chiniya, Purva and Tyagi, Utkarsh and Kumar, Sonal and Manocha, Dinesh", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.377", doi = "10.18653/v1/2023.emnlp-main.377", pages = "6159--6173", abstract = "The tremendous growth of social media users interacting in online conversations has led to significant growth in hate speech affecting people from various demographics. Most of the prior works focus on detecting explicit hate speech, which is overt and leverages hateful phrases, with very little work focusing on detecting hate speech that is implicit or denotes hatred through indirect or coded language. In this paper, we present CoSyn, a context synergized neural network that explicitly incorporates user- and conversational-context for detecting implicit hate speech in online conversations. CoSyn introduces novel ways to encode these external contexts and employs a novel context interaction mechanism that clearly captures the interplay between them, making independent assessments of the amounts of information to be retrieved from these noisy contexts. Additionally, it carries out all these operations in the hyperbolic space to account for the scale-free dynamics of social media. We demonstrate the effectiveness of CoSyn on 6 hate speech datasets and show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24{\%} - 57.8{\%}. We make our code available.", }
The tremendous growth of social media users interacting in online conversations has led to significant growth in hate speech affecting people from various demographics. Most of the prior works focus on detecting explicit hate speech, which is overt and leverages hateful phrases, with very little work focusing on detecting hate speech that is implicit or denotes hatred through indirect or coded language. In this paper, we present CoSyn, a context synergized neural network that explicitly incorporates user- and conversational-context for detecting implicit hate speech in online conversations. CoSyn introduces novel ways to encode these external contexts and employs a novel context interaction mechanism that clearly captures the interplay between them, making independent assessments of the amounts of information to be retrieved from these noisy contexts. Additionally, it carries out all these operations in the hyperbolic space to account for the scale-free dynamics of social media. We demonstrate the effectiveness of CoSyn on 6 hate speech datasets and show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24{\%} - 57.8{\%}. We make our code available.
[ "Ghosh, Sreyan", "Suri, Manan", "Chiniya, Purva", "Tyagi, Utkarsh", "Kumar, Sonal", "Manocha, Dinesh" ]
CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network
emnlp-main.377
2303.03387
[ "https://github.com/sreyan88/cosyn" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.378.bib
https://aclanthology.org/2023.emnlp-main.378/
@inproceedings{ye-etal-2023-cleme, title = "{CLEME}: Debiasing Multi-reference Evaluation for Grammatical Error Correction", author = "Ye, Jingheng and Li, Yinghui and Zhou, Qingyu and Li, Yangning and Ma, Shirong and Zheng, Hai-Tao and Shen, Ying", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.378", doi = "10.18653/v1/2023.emnlp-main.378", pages = "6174--6189", abstract = "Evaluating the performance of Grammatical Error Correction (GEC) systems is a challenging task due to its subjectivity. Designing an evaluation metric that is as objective as possible is crucial to the development of GEC task. However, mainstream evaluation metrics, i.e., reference-based metrics, introduce bias into the multi-reference evaluation by extracting edits without considering the presence of multiple references. To overcome this issue, we propose Chunk-LE Multi-reference Evaluation (CLEME), designed to evaluate GEC systems in the multi-reference evaluation setting. CLEME builds chunk sequences with consistent boundaries for the source, the hypothesis and references, thus eliminating the bias caused by inconsistent edit boundaries. Furthermore, we observe the consistent boundary could also act as the boundary of grammatical errors, based on which the F$_{0.5}$ score is then computed following the correction independence assumption. We conduct experiments on six English reference sets based on the CoNLL-2014 shared task. Extensive experiments and detailed analyses demonstrate the correctness of our discovery and the effectiveness of CLEME. Further analysis reveals that CLEME is robust to evaluate GEC systems across reference sets with varying numbers of references and annotation styles. All the source codes of CLEME are released at https://github.com/THUKElab/CLEME.", }
Evaluating the performance of Grammatical Error Correction (GEC) systems is a challenging task due to its subjectivity. Designing an evaluation metric that is as objective as possible is crucial to the development of GEC task. However, mainstream evaluation metrics, i.e., reference-based metrics, introduce bias into the multi-reference evaluation by extracting edits without considering the presence of multiple references. To overcome this issue, we propose Chunk-LE Multi-reference Evaluation (CLEME), designed to evaluate GEC systems in the multi-reference evaluation setting. CLEME builds chunk sequences with consistent boundaries for the source, the hypothesis and references, thus eliminating the bias caused by inconsistent edit boundaries. Furthermore, we observe the consistent boundary could also act as the boundary of grammatical errors, based on which the F$_{0.5}$ score is then computed following the correction independence assumption. We conduct experiments on six English reference sets based on the CoNLL-2014 shared task. Extensive experiments and detailed analyses demonstrate the correctness of our discovery and the effectiveness of CLEME. Further analysis reveals that CLEME is robust to evaluate GEC systems across reference sets with varying numbers of references and annotation styles. All the source codes of CLEME are released at https://github.com/THUKElab/CLEME.
[ "Ye, Jingheng", "Li, Yinghui", "Zhou, Qingyu", "Li, Yangning", "Ma, Shirong", "Zheng, Hai-Tao", "Shen, Ying" ]
CLEME: Debiasing Multi-reference Evaluation for Grammatical Error Correction
emnlp-main.378
2305.10819
[ "https://github.com/yejh123/cleme" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.379.bib
https://aclanthology.org/2023.emnlp-main.379/
@inproceedings{kamp-etal-2023-dynamic, title = "Dynamic Top-k Estimation Consolidates Disagreement between Feature Attribution Methods", author = "Kamp, Jonathan and Beinborn, Lisa and Fokkens, Antske", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.379", doi = "10.18653/v1/2023.emnlp-main.379", pages = "6190--6197", abstract = "Feature attribution scores are used for explaining the prediction of a text classifier to users by highlighting a k number of tokens. In this work, we propose a way to determine the number of optimal k tokens that should be displayed from sequential properties of the attribution scores. Our approach is dynamic across sentences, method-agnostic, and deals with sentence length bias. We compare agreement between multiple methods and humans on an NLI task, using fixed k and dynamic k. We find that perturbation-based methods and Vanilla Gradient exhibit highest agreement on most method{--}method and method{--}human agreement metrics with a static k. Their advantage over other methods disappears with dynamic ks which mainly improve Integrated Gradient and GradientXInput. To our knowledge, this is the first evidence that sequential properties of attribution scores are informative for consolidating attribution signals for human interpretation.", }
Feature attribution scores are used for explaining the prediction of a text classifier to users by highlighting a k number of tokens. In this work, we propose a way to determine the number of optimal k tokens that should be displayed from sequential properties of the attribution scores. Our approach is dynamic across sentences, method-agnostic, and deals with sentence length bias. We compare agreement between multiple methods and humans on an NLI task, using fixed k and dynamic k. We find that perturbation-based methods and Vanilla Gradient exhibit highest agreement on most method{--}method and method{--}human agreement metrics with a static k. Their advantage over other methods disappears with dynamic ks which mainly improve Integrated Gradient and GradientXInput. To our knowledge, this is the first evidence that sequential properties of attribution scores are informative for consolidating attribution signals for human interpretation.
[ "Kamp, Jonathan", "Beinborn, Lisa", "Fokkens, Antske" ]
Dynamic Top-k Estimation Consolidates Disagreement between Feature Attribution Methods
emnlp-main.379
2310.05619
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.380.bib
https://aclanthology.org/2023.emnlp-main.380/
@inproceedings{wu-etal-2023-sentistream, title = "{S}enti{S}tream: A Co-Training Framework for Adaptive Online Sentiment Analysis in Evolving Data Streams", author = "Wu, Yuhao and Sharma, Karthick and Seah, Chun and Zhang, Shuhao", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.380", doi = "10.18653/v1/2023.emnlp-main.380", pages = "6198--6212", abstract = "Online sentiment analysis has emerged as a crucial component in numerous data-driven applications, including social media monitoring, customer feedback analysis, and online reputation management. Despite their importance, current methodologies falter in effectively managing the continuously evolving nature of data streams, largely due to their reliance on substantial, pre-existing labelled datasets. This paper presents $\textbf{sentistream}$, a novel co-training framework specifically designed for efficient sentiment analysis within dynamic data streams. Comprising unsupervised, semi-supervised, and stream merge modules, $\textbf{ sentistream}$ guarantees constant adaptability to evolving data landscapes. This research delves into the continuous adaptation of language models for online sentiment analysis, focusing on real-world applications. Experimental evaluations using data streams derived from three benchmark sentiment analysis datasets confirm that our proposed methodology surpasses existing approaches in terms of both accuracy and computational efficiency.", }
Online sentiment analysis has emerged as a crucial component in numerous data-driven applications, including social media monitoring, customer feedback analysis, and online reputation management. Despite their importance, current methodologies falter in effectively managing the continuously evolving nature of data streams, largely due to their reliance on substantial, pre-existing labelled datasets. This paper presents $\textbf{sentistream}$, a novel co-training framework specifically designed for efficient sentiment analysis within dynamic data streams. Comprising unsupervised, semi-supervised, and stream merge modules, $\textbf{ sentistream}$ guarantees constant adaptability to evolving data landscapes. This research delves into the continuous adaptation of language models for online sentiment analysis, focusing on real-world applications. Experimental evaluations using data streams derived from three benchmark sentiment analysis datasets confirm that our proposed methodology surpasses existing approaches in terms of both accuracy and computational efficiency.
[ "Wu, Yuhao", "Sharma, Karthick", "Seah, Chun", "Zhang, Shuhao" ]
SentiStream: A Co-Training Framework for Adaptive Online Sentiment Analysis in Evolving Data Streams
emnlp-main.380
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.381.bib
https://aclanthology.org/2023.emnlp-main.381/
@inproceedings{zhang-etal-2023-hypernetwork, title = "{H}yper{N}etwork-based Decoupling to Improve Model Generalization for Few-Shot Relation Extraction", author = "Zhang, Liang and Zhou, Chulun and Meng, Fandong and Su, Jinsong and Chen, Yidong and Zhou, Jie", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.381", doi = "10.18653/v1/2023.emnlp-main.381", pages = "6213--6223", abstract = "Few-shot relation extraction (FSRE) aims to train a model that can deal with new relations using only a few labeled examples. Most existing studies employ Prototypical Networks for FSRE, which usually overfits the relation classes in the training set and cannot generalize well to unseen relations. By investigating the class separation of an FSRE model, we find that model upper layers are prone to learn relation-specific knowledge. Therefore, in this paper, we propose a HyperNetwork-based Decoupling approach to improve the generalization of FSRE models. Specifically, our model consists of an encoder, a network generator (for producing relation classifiers) and the produced-then-finetuned classifiers for every N-way-K-shot episode. Meanwhile, we design a two-step training framework along with a class-agnostic aligner, in which the generated classifiers focus on acquiring relation-specific knowledge and the encoder is encouraged to learn more general relation knowledge. In this way, the roles of upper and lower layers in an FSRE model are explicitly decoupled, thus enhancing its generalizing capability during testing. Experiments on two public datasets demonstrate the effectiveness of our method.", }
Few-shot relation extraction (FSRE) aims to train a model that can deal with new relations using only a few labeled examples. Most existing studies employ Prototypical Networks for FSRE, which usually overfits the relation classes in the training set and cannot generalize well to unseen relations. By investigating the class separation of an FSRE model, we find that model upper layers are prone to learn relation-specific knowledge. Therefore, in this paper, we propose a HyperNetwork-based Decoupling approach to improve the generalization of FSRE models. Specifically, our model consists of an encoder, a network generator (for producing relation classifiers) and the produced-then-finetuned classifiers for every N-way-K-shot episode. Meanwhile, we design a two-step training framework along with a class-agnostic aligner, in which the generated classifiers focus on acquiring relation-specific knowledge and the encoder is encouraged to learn more general relation knowledge. In this way, the roles of upper and lower layers in an FSRE model are explicitly decoupled, thus enhancing its generalizing capability during testing. Experiments on two public datasets demonstrate the effectiveness of our method.
[ "Zhang, Liang", "Zhou, Chulun", "Meng, F", "ong", "Su, Jinsong", "Chen, Yidong", "Zhou, Jie" ]
HyperNetwork-based Decoupling to Improve Model Generalization for Few-Shot Relation Extraction
emnlp-main.381
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.382.bib
https://aclanthology.org/2023.emnlp-main.382/
@inproceedings{kumar-schockaert-2023-solving, title = "Solving Hard Analogy Questions with Relation Embedding Chains", author = "Kumar, Nitesh and Schockaert, Steven", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.382", doi = "10.18653/v1/2023.emnlp-main.382", pages = "6224--6236", abstract = "Modelling how concepts are related is a central topic in Lexical Semantics. A common strategy is to rely on knowledge graphs (KGs) such as ConceptNet, and to model the relation between two concepts as a set of paths. However, KGs are limited to a fixed set of relation types, and they are incomplete and often noisy. Another strategy is to distill relation embeddings from a fine-tuned language model. However, this is less suitable for words that are only indirectly related and it does not readily allow us to incorporate structured domain knowledge. In this paper, we aim to combine the best of both worlds. We model relations as paths but associate their edges with relation embeddings. The paths are obtained by first identifying suitable intermediate words and then selecting those words for which informative relation embeddings can be obtained. We empirically show that our proposed representations are useful for solving hard analogy questions.", }
Modelling how concepts are related is a central topic in Lexical Semantics. A common strategy is to rely on knowledge graphs (KGs) such as ConceptNet, and to model the relation between two concepts as a set of paths. However, KGs are limited to a fixed set of relation types, and they are incomplete and often noisy. Another strategy is to distill relation embeddings from a fine-tuned language model. However, this is less suitable for words that are only indirectly related and it does not readily allow us to incorporate structured domain knowledge. In this paper, we aim to combine the best of both worlds. We model relations as paths but associate their edges with relation embeddings. The paths are obtained by first identifying suitable intermediate words and then selecting those words for which informative relation embeddings can be obtained. We empirically show that our proposed representations are useful for solving hard analogy questions.
[ "Kumar, Nitesh", "Schockaert, Steven" ]
Solving Hard Analogy Questions with Relation Embedding Chains
emnlp-main.382
2310.12379
[ "https://github.com/niteshroyal/solvinghardanalogyquestions" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.383.bib
https://aclanthology.org/2023.emnlp-main.383/
@inproceedings{shen-etal-2023-modeling, title = "Modeling Empathic Similarity in Personal Narratives", author = "Shen, Jocelyn and Sap, Maarten and Colon-Hernandez, Pedro and Park, Hae and Breazeal, Cynthia", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.383", doi = "10.18653/v1/2023.emnlp-main.383", pages = "6237--6252", abstract = "The most meaningful connections between people are often fostered through expression of shared vulnerability and emotional experiences in personal narratives. We introduce a new task of identifying similarity in personal stories based on empathic resonance, i.e., the extent to which two people empathize with each others{'} experiences, as opposed to raw semantic or lexical similarity, as has predominantly been studied in NLP. Using insights from social psychology, we craft a framework that operationalizes empathic similarity in terms of three key features of stories: main events, emotional trajectories, and overall morals or takeaways. We create EmpathicStories, a dataset of 1,500 personal stories annotated with our empathic similarity features, and 2,000 pairs of stories annotated with empathic similarity scores. Using our dataset, we fine-tune a model to compute empathic similarity of story pairs, and show that this outperforms semantic similarity models on automated correlation and retrieval metrics. Through a user study with 150 participants, we also assess the effect our model has on retrieving stories that users empathize with, compared to naive semantic similarity-based retrieval, and find that participants empathized significantly more with stories retrieved by our model. Our work has strong implications for the use of empathy-aware models to foster human connection and empathy between people.", }
The most meaningful connections between people are often fostered through expression of shared vulnerability and emotional experiences in personal narratives. We introduce a new task of identifying similarity in personal stories based on empathic resonance, i.e., the extent to which two people empathize with each others{'} experiences, as opposed to raw semantic or lexical similarity, as has predominantly been studied in NLP. Using insights from social psychology, we craft a framework that operationalizes empathic similarity in terms of three key features of stories: main events, emotional trajectories, and overall morals or takeaways. We create EmpathicStories, a dataset of 1,500 personal stories annotated with our empathic similarity features, and 2,000 pairs of stories annotated with empathic similarity scores. Using our dataset, we fine-tune a model to compute empathic similarity of story pairs, and show that this outperforms semantic similarity models on automated correlation and retrieval metrics. Through a user study with 150 participants, we also assess the effect our model has on retrieving stories that users empathize with, compared to naive semantic similarity-based retrieval, and find that participants empathized significantly more with stories retrieved by our model. Our work has strong implications for the use of empathy-aware models to foster human connection and empathy between people.
[ "Shen, Jocelyn", "Sap, Maarten", "Colon-Hern", "ez, Pedro", "Park, Hae", "Breazeal, Cynthia" ]
Modeling Empathic Similarity in Personal Narratives
emnlp-main.383
2305.14246
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.384.bib
https://aclanthology.org/2023.emnlp-main.384/
@inproceedings{singh-etal-2023-tree, title = "Tree Prompting: Efficient Task Adaptation without Fine-Tuning", author = "Singh, Chandan and Morris, John and Rush, Alexander and Gao, Jianfeng and Deng, Yuntian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.384", doi = "10.18653/v1/2023.emnlp-main.384", pages = "6253--6267", abstract = "Prompting language models (LMs) is the main interface for applying them to new tasks. However, for smaller LMs, prompting provides low accuracy compared to gradient-based fine-tuning. Tree Prompting is an approach to prompting which builds a decision tree of prompts, linking multiple prompt-LM calls together to solve a task. At inference time, each call to the LM is determined by efficiently routing the outcome of the previous call using the tree. Experiments on classification datasets show that Tree Prompting improves accuracy over competing methods and is competitive with fine-tuning. We also show that variants of Tree Prompting allow inspection of a model{'}s decision-making process.", }
Prompting language models (LMs) is the main interface for applying them to new tasks. However, for smaller LMs, prompting provides low accuracy compared to gradient-based fine-tuning. Tree Prompting is an approach to prompting which builds a decision tree of prompts, linking multiple prompt-LM calls together to solve a task. At inference time, each call to the LM is determined by efficiently routing the outcome of the previous call using the tree. Experiments on classification datasets show that Tree Prompting improves accuracy over competing methods and is competitive with fine-tuning. We also show that variants of Tree Prompting allow inspection of a model{'}s decision-making process.
[ "Singh, Ch", "an", "Morris, John", "Rush, Alex", "er", "Gao, Jianfeng", "Deng, Yuntian" ]
Tree Prompting: Efficient Task Adaptation without Fine-Tuning
emnlp-main.384
2310.14034
[ "https://github.com/csinva/tree-prompt" ]
https://huggingface.co/papers/2310.14034
2
2
0
5
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.385.bib
https://aclanthology.org/2023.emnlp-main.385/
@inproceedings{xu-etal-2023-baize, title = "Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data", author = "Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.385", doi = "10.18653/v1/2023.emnlp-main.385", pages = "6268--6278", abstract = "Chat models, such as ChatGPT, have shown impressive capabilities and have been rapidly adopted across numerous domains. However, these models are only accessible through a restricted API, creating barriers for new research and progress in the field. We propose a pipeline that can automatically generate a high-quality multi-turn chat corpus by leveraging ChatGPT to engage in a conversation with itself. Subsequently, we employ parameter-efficient tuning to enhance LLaMA, an open-source large language model. The resulting model, named Baize, demonstrates good performance in multi-turn dialogues with guardrails that minimize potential risks. Additionally, we propose a new technique called Self-Distill with Feedback, to further improve the performance of the Baize models with feedback from ChatGPT.", }
Chat models, such as ChatGPT, have shown impressive capabilities and have been rapidly adopted across numerous domains. However, these models are only accessible through a restricted API, creating barriers for new research and progress in the field. We propose a pipeline that can automatically generate a high-quality multi-turn chat corpus by leveraging ChatGPT to engage in a conversation with itself. Subsequently, we employ parameter-efficient tuning to enhance LLaMA, an open-source large language model. The resulting model, named Baize, demonstrates good performance in multi-turn dialogues with guardrails that minimize potential risks. Additionally, we propose a new technique called Self-Distill with Feedback, to further improve the performance of the Baize models with feedback from ChatGPT.
[ "Xu, Canwen", "Guo, Daya", "Duan, Nan", "McAuley, Julian" ]
Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data
emnlp-main.385
2304.01196
[ "https://github.com/project-baize/baize" ]
https://huggingface.co/papers/2304.01196
1
0
0
4
[ "project-baize/baize-v2-7b", "project-baize/baize-v2-13b", "TheBloke/Project-Baize-v2-13B-GPTQ", "TheBloke/Project-Baize-v2-13B-GGML", "TheBloke/Project-Baize-v2-7B-GPTQ", "TheBloke/Project-Baize-v2-7B-GGML", "TheBloke/Baize-v2-13B-SuperHOT-8K-GGML", "TheBloke/Baize-v2-7B-SuperHOT-8K-GGML", "TheBloke/Baize-v2-13B-SuperHOT-8K-GPTQ", "TheBloke/Baize-v2-13B-SuperHOT-8K-fp16", "TheBloke/Baize-v2-7B-SuperHOT-8K-fp16", "TheBloke/Baize-v2-7B-SuperHOT-8K-GPTQ" ]
[ "bkai-foundation-models/vi-self-chat-sharegpt-format", "andreabac3/Quora-Italian-Fauno-Baize", "andreabac3/MedQuaAD-Italian-Fauno-Baize", "andreabac3/StackOverflow-Italian-Fauno-Baize" ]
[ "open-llm-leaderboard/open_llm_leaderboard", "project-baize/chat-with-baize", "Intel/low_bit_open_llm_leaderboard", "BAAI/open_cn_llm_leaderboard", "gsaivinay/open_llm_leaderboard", "HuggingFaceH4/Falcon-vs-LLaMA", "meval/multilingual-chatbot-arena-leaderboard", "GTBench/GTBench", "llm-blender/LLM-Blender", "felixz/open_llm_leaderboard", "OPTML-Group/UnlearnCanvas-Benchmark", "li-qing/FIRE", "Vikhrmodels/small-shlepa-lb", "b1sheng/kg_llm_leaderboard_test", "neubla/neubla-llm-evaluation-board", "rodrigomasini/data_only_open_llm_leaderboard", "Docfile/open_llm_leaderboard", "IELTS8/ISF", "tianleliphoebe/visual-arena", "Ashmal/MobiLlama", "smothiki/open_llm_leaderboard", "0x1668/open_llm_leaderboard", "pngwn/open_llm_leaderboard-check", "asir0z/open_llm_leaderboard", "kbmlcoding/open_llm_leaderboard_free", "aichampions/open_llm_leaderboard", "Adeco/open_llm_leaderboard", "anirudh937/open_llm_leaderboard", "smothiki/open_llm_leaderboard2", "pngwn/open_llm_leaderboard", "pngwn/open_llm_leaderboard_two", "choco9966/LeaderboardTest", "choco9966/open-ko-llm-leaderboard", "dbasu/multilingual-chatbot-arena-leaderboard", "alexkueck/ChatBotLI2Klein", "alexkueck/LIStarCode", "hugo1234/galileo", "iamrobotbear/chat-test", "ibagur/gbv_langchain_bot", "iamAI123/podsum", "alexkueck/LIFineTuned", "alexshengzhili/calahealthgpt", "BulatF/llama_test", "Bofeee5675/FIRE", "evelyn-lo/evelyn", "yuantao-infini-ai/demo_test" ]
1
Poster
https://aclanthology.org/2023.emnlp-main.386.bib
https://aclanthology.org/2023.emnlp-main.386/
@inproceedings{jiang-etal-2023-empathy, title = "Empathy Intent Drives Empathy Detection", author = "Jiang, Liting and Wu, Di and Mao, Bohui and Li, Yanbing and Slamu, Wushour", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.386", doi = "10.18653/v1/2023.emnlp-main.386", pages = "6279--6290", abstract = "Empathy plays an important role in the human dialogue. Detecting the empathetic direction expressed by the user is necessary for empathetic dialogue systems because it is highly relevant to understanding the user{'}s needs. Several studies have shown that empathy intent information improves the ability to response capacity of empathetic dialogue. However, the interaction between empathy detection and empathy intent recognition has not been explored. To this end, we invite 3 experts to manually annotate the healthy empathy detection datasets IEMPATHIZE and TwittEmp with 8 empathy intent labels, and perform joint training for the two tasks. Empirical study has shown that the introduction of empathy intent recognition task can improve the accuracy of empathy detection task, and we analyze possible reasons for this improvement. To make joint training of the two tasks more challenging, we propose a novel framework, Cascaded Label Signal Network, which uses the cascaded interactive attention module and the label signal enhancement module to capture feature exchange information between empathy and empathy intent representations. Experimental results show that our framework outperforms all baselines under both settings on the two datasets.", }
Empathy plays an important role in the human dialogue. Detecting the empathetic direction expressed by the user is necessary for empathetic dialogue systems because it is highly relevant to understanding the user{'}s needs. Several studies have shown that empathy intent information improves the ability to response capacity of empathetic dialogue. However, the interaction between empathy detection and empathy intent recognition has not been explored. To this end, we invite 3 experts to manually annotate the healthy empathy detection datasets IEMPATHIZE and TwittEmp with 8 empathy intent labels, and perform joint training for the two tasks. Empirical study has shown that the introduction of empathy intent recognition task can improve the accuracy of empathy detection task, and we analyze possible reasons for this improvement. To make joint training of the two tasks more challenging, we propose a novel framework, Cascaded Label Signal Network, which uses the cascaded interactive attention module and the label signal enhancement module to capture feature exchange information between empathy and empathy intent representations. Experimental results show that our framework outperforms all baselines under both settings on the two datasets.
[ "Jiang, Liting", "Wu, Di", "Mao, Bohui", "Li, Yanbing", "Slamu, Wushour" ]
Empathy Intent Drives Empathy Detection
emnlp-main.386
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.387.bib
https://aclanthology.org/2023.emnlp-main.387/
@inproceedings{shi-etal-2023-adaptive, title = "Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot Filling", author = "Shi, Yuanjun and Wu, Linzhi and Shao, Minglai", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.387", doi = "10.18653/v1/2023.emnlp-main.387", pages = "6291--6301", abstract = "Recently slot filling has witnessed great development thanks to deep learning and the availability of large-scale annotated data. However, it poses a critical challenge to handle a novel domain whose samples are never seen during training. The recognition performance might be greatly degraded due to severe domain shifts. Most prior works deal with this problem in a two-pass pipeline manner based on metric learning. In practice, these dominant pipeline models may be limited in computational efficiency and generalization capacity because of non-parallel inference and context-free discrete label embeddings. To this end, we re-examine the typical metric-based methods, and propose a new adaptive end-to-end metric learning scheme for the challenging zero-shot slot filling. Considering simplicity, efficiency and generalizability, we present a cascade-style joint learning framework coupled with context-aware soft label representations and slot-level contrastive representation learning to mitigate the data and label shift problems effectively. Extensive experiments on public benchmarks demonstrate the superiority of the proposed approach over a series of competitive baselines.", }
Recently slot filling has witnessed great development thanks to deep learning and the availability of large-scale annotated data. However, it poses a critical challenge to handle a novel domain whose samples are never seen during training. The recognition performance might be greatly degraded due to severe domain shifts. Most prior works deal with this problem in a two-pass pipeline manner based on metric learning. In practice, these dominant pipeline models may be limited in computational efficiency and generalization capacity because of non-parallel inference and context-free discrete label embeddings. To this end, we re-examine the typical metric-based methods, and propose a new adaptive end-to-end metric learning scheme for the challenging zero-shot slot filling. Considering simplicity, efficiency and generalizability, we present a cascade-style joint learning framework coupled with context-aware soft label representations and slot-level contrastive representation learning to mitigate the data and label shift problems effectively. Extensive experiments on public benchmarks demonstrate the superiority of the proposed approach over a series of competitive baselines.
[ "Shi, Yuanjun", "Wu, Linzhi", "Shao, Minglai" ]
Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot Filling
emnlp-main.387
2310.15294
[ "https://github.com/switchsyj/adae2ml-xsf" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.388.bib
https://aclanthology.org/2023.emnlp-main.388/
@inproceedings{imperial-kochmar-2023-basahacorpus, title = "{B}asaha{C}orpus: An Expanded Linguistic Resource for Readability Assessment in {C}entral {P}hilippine Languages", author = "Imperial, Joseph Marvin and Kochmar, Ekaterina", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.388", doi = "10.18653/v1/2023.emnlp-main.388", pages = "6302--6309", abstract = "Current research on automatic readability assessment (ARA) has focused on improving the performance of models in high-resource languages such as English. In this work, we introduce and release BasahaCorpus as part of an initiative aimed at expanding available corpora and baseline models for readability assessment in lower resource languages in the Philippines. We compiled a corpus of short fictional narratives written in Hiligaynon, Minasbate, Karay-a, and Rinconada{---}languages belonging to the Central Philippine family tree subgroup{---}to train ARA models using surface-level, syllable-pattern, and n-gram overlap features. We also propose a new hierarchical cross-lingual modeling approach that takes advantage of a language{'}s placement in the family tree to increase the amount of available training data. Our study yields encouraging results that support previous work showcasing the efficacy of cross-lingual models in low-resource settings, as well as similarities in highly informative linguistic features for mutually intelligible languages.", }
Current research on automatic readability assessment (ARA) has focused on improving the performance of models in high-resource languages such as English. In this work, we introduce and release BasahaCorpus as part of an initiative aimed at expanding available corpora and baseline models for readability assessment in lower resource languages in the Philippines. We compiled a corpus of short fictional narratives written in Hiligaynon, Minasbate, Karay-a, and Rinconada{---}languages belonging to the Central Philippine family tree subgroup{---}to train ARA models using surface-level, syllable-pattern, and n-gram overlap features. We also propose a new hierarchical cross-lingual modeling approach that takes advantage of a language{'}s placement in the family tree to increase the amount of available training data. Our study yields encouraging results that support previous work showcasing the efficacy of cross-lingual models in low-resource settings, as well as similarities in highly informative linguistic features for mutually intelligible languages.
[ "Imperial, Joseph Marvin", "Kochmar, Ekaterina" ]
BasahaCorpus: An Expanded Linguistic Resource for Readability Assessment in Central Philippine Languages
emnlp-main.388
2310.11584
[ "https://github.com/imperialite/basahacorpus-hierarchicalcrosslingualara" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.389.bib
https://aclanthology.org/2023.emnlp-main.389/
@inproceedings{ghosal-etal-2023-retag, title = "{R}e{TAG}: Reasoning Aware Table to Analytic Text Generation", author = "Ghosal, Deepanway and Nema, Preksha and Raghuveer, Aravindan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.389", doi = "10.18653/v1/2023.emnlp-main.389", pages = "6310--6324", abstract = "The task of table summarization involves generating text that both succinctly and accurately represents the table or a specific set of highlighted cells within a table. While significant progress has been made in table to text generation techniques, models still mostly generate descriptive summaries, which reiterates the information contained within the table in sentences. Through analysis of popular table to text benchmarks (ToTTo (Parikh et al., 2020 and InfoTabs (Gupta et al., 2020) we observe that in order to generate the ideal summary, multiple types of reasoning is needed coupled with access to knowledge beyond the scope of the table. To address this gap, we propose ReTAG, a table and reasoning aware model that uses vector-quantization to infuse different types of analytical reasoning into the output. ReTAG achieves 2.2{\%}, 2.9{\%} improvement on the PARENT metric in the relevant slice of ToTTo and InfoTabs for the table to text generation task over state of the art baselines. Through human evaluation, we observe that output from ReTAG is upto 12{\%} more faithful and analytical compared to a strong table-aware model. To the best of our knowledge, ReTAG is the first model that can controllably use multiple reasoning methods within a structure-aware sequence to sequence model to surpass state of the art performance in multiple table to text tasks. We extend (and open source 35.6K analytical, 55.9k descriptive instances) the ToTTo, InfoTabs datasets with the reasoning categories used in each reference sentences.", }
The task of table summarization involves generating text that both succinctly and accurately represents the table or a specific set of highlighted cells within a table. While significant progress has been made in table to text generation techniques, models still mostly generate descriptive summaries, which reiterates the information contained within the table in sentences. Through analysis of popular table to text benchmarks (ToTTo (Parikh et al., 2020 and InfoTabs (Gupta et al., 2020) we observe that in order to generate the ideal summary, multiple types of reasoning is needed coupled with access to knowledge beyond the scope of the table. To address this gap, we propose ReTAG, a table and reasoning aware model that uses vector-quantization to infuse different types of analytical reasoning into the output. ReTAG achieves 2.2{\%}, 2.9{\%} improvement on the PARENT metric in the relevant slice of ToTTo and InfoTabs for the table to text generation task over state of the art baselines. Through human evaluation, we observe that output from ReTAG is upto 12{\%} more faithful and analytical compared to a strong table-aware model. To the best of our knowledge, ReTAG is the first model that can controllably use multiple reasoning methods within a structure-aware sequence to sequence model to surpass state of the art performance in multiple table to text tasks. We extend (and open source 35.6K analytical, 55.9k descriptive instances) the ToTTo, InfoTabs datasets with the reasoning categories used in each reference sentences.
[ "Ghosal, Deepanway", "Nema, Preksha", "Raghuveer, Aravindan" ]
ReTAG: Reasoning Aware Table to Analytic Text Generation
emnlp-main.389
2305.11826
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.390.bib
https://aclanthology.org/2023.emnlp-main.390/
@inproceedings{chen-etal-2023-beyond, title = "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators", author = "Chen, Liang and Deng, Yang and Bian, Yatao and Qin, Zeyu and Wu, Bingzhe and Chua, Tat-Seng and Wong, Kam-Fai", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.390", doi = "10.18653/v1/2023.emnlp-main.390", pages = "6325--6341", abstract = "Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks when being prompted to generate world knowledge. However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge. In light of this, we introduce CONNER, a COmpreheNsive kNowledge Evaluation fRamework, designed to systematically and automatically evaluate generated knowledge from six important perspectives {--} Factuality, Relevance, Coherence, Informativeness, Helpfulness and Validity. We conduct an extensive empirical analysis of the generated knowledge from three different types of LLMs on two widely studied knowledge-intensive tasks, i.e., open-domain question answering and knowledge-grounded dialogue. Surprisingly, our study reveals that the factuality of generated knowledge, even if lower, does not significantly hinder downstream tasks. Instead, the relevance and coherence of the outputs are more important than small factual mistakes. Further, we show how to use CONNER to improve knowledge-intensive tasks by designing two strategies: Prompt Engineering and Knowledge Selection. Our evaluation code and LLM-generated knowledge with human annotations will be released to facilitate future research.", }
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks when being prompted to generate world knowledge. However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge. In light of this, we introduce CONNER, a COmpreheNsive kNowledge Evaluation fRamework, designed to systematically and automatically evaluate generated knowledge from six important perspectives {--} Factuality, Relevance, Coherence, Informativeness, Helpfulness and Validity. We conduct an extensive empirical analysis of the generated knowledge from three different types of LLMs on two widely studied knowledge-intensive tasks, i.e., open-domain question answering and knowledge-grounded dialogue. Surprisingly, our study reveals that the factuality of generated knowledge, even if lower, does not significantly hinder downstream tasks. Instead, the relevance and coherence of the outputs are more important than small factual mistakes. Further, we show how to use CONNER to improve knowledge-intensive tasks by designing two strategies: Prompt Engineering and Knowledge Selection. Our evaluation code and LLM-generated knowledge with human annotations will be released to facilitate future research.
[ "Chen, Liang", "Deng, Yang", "Bian, Yatao", "Qin, Zeyu", "Wu, Bingzhe", "Chua, Tat-Seng", "Wong, Kam-Fai" ]
Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators
emnlp-main.390
2310.07289
[ "https://github.com/chanliang/conner" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.391.bib
https://aclanthology.org/2023.emnlp-main.391/
@inproceedings{li-etal-2023-compressing, title = "Compressing Context to Enhance Inference Efficiency of Large Language Models", author = "Li, Yucheng and Dong, Bo and Guerin, Frank and Lin, Chenghua", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.391", doi = "10.18653/v1/2023.emnlp-main.391", pages = "6342--6353", abstract = "Large language models (LLMs) achieved remarkable performance across various tasks. However, they face challenges in managing long documents and extended conversations, due to significantly increased computational requirements, both in memory and inference time, and potential context truncation when the input exceeds the LLM{'}s fixed context length. This paper proposes a method called \textit{Selective Context} that enhances the inference efficiency of LLMs by identifying and pruning redundancy in the input context to make the input more compact. We test our approach using common data sources requiring long context processing: arXiv papers, news articles, and long conversations, on tasks of summarisation, question answering, and response generation. Experimental results show that Selective Context significantly reduces memory cost and decreases generation latency while maintaining comparable performance compared to that achieved when full context is used. Specifically, we achieve a 50{\%} reduction in context cost, resulting in a 36{\%} reduction in inference memory usage and a 32{\%} reduction in inference time, while observing only a minor drop of .023 in BERTscore and .038 in faithfulness on four downstream applications, indicating that our method strikes a good balance between efficiency and performance.", }
Large language models (LLMs) achieved remarkable performance across various tasks. However, they face challenges in managing long documents and extended conversations, due to significantly increased computational requirements, both in memory and inference time, and potential context truncation when the input exceeds the LLM{'}s fixed context length. This paper proposes a method called \textit{Selective Context} that enhances the inference efficiency of LLMs by identifying and pruning redundancy in the input context to make the input more compact. We test our approach using common data sources requiring long context processing: arXiv papers, news articles, and long conversations, on tasks of summarisation, question answering, and response generation. Experimental results show that Selective Context significantly reduces memory cost and decreases generation latency while maintaining comparable performance compared to that achieved when full context is used. Specifically, we achieve a 50{\%} reduction in context cost, resulting in a 36{\%} reduction in inference memory usage and a 32{\%} reduction in inference time, while observing only a minor drop of .023 in BERTscore and .038 in faithfulness on four downstream applications, indicating that our method strikes a good balance between efficiency and performance.
[ "Li, Yucheng", "Dong, Bo", "Guerin, Frank", "Lin, Chenghua" ]
Compressing Context to Enhance Inference Efficiency of Large Language Models
emnlp-main.391
2310.06201
[ "https://github.com/liyucheng09/selective_context" ]
https://huggingface.co/papers/2310.06201
0
0
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.392.bib
https://aclanthology.org/2023.emnlp-main.392/
@inproceedings{li-qiu-2023-mot, title = "{M}o{T}: Memory-of-Thought Enables {C}hat{GPT} to Self-Improve", author = "Li, Xiaonan and Qiu, Xipeng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.392", doi = "10.18653/v1/2023.emnlp-main.392", pages = "6354--6374", abstract = "Large Language Models (LLMs) have shown impressive abilities on various tasks. However, fundamentally improving them depends on high-quality datasets or computationally expensive fine-tuning. On the contrary, humans can easily improve themselves by self-thinking and memory, without external resources. In this paper, we propose a framework, **MoT**, to let the LLM self-improve through **M**emory **o**f **T**houghts, without annotated datasets and parameter updates. Specifically, MoT is divided into two stages: 1. before the test stage, the LLM pre-thinks on the unlabeled dataset and saves the high-confidence thoughts as external memory; 2. During the test stage, given a test question, the LLM recalls relevant memory to help itself reason and answer it. Experimental results show that MoT can help ChatGPT significantly improve its abilities in arithmetic reasoning, commonsense reasoning, factual reasoning, and natural language inference. Further analyses show that each component contributes critically to the improvements and MoT can lead to consistent improvements across various CoT methods and LLMs.", }
Large Language Models (LLMs) have shown impressive abilities on various tasks. However, fundamentally improving them depends on high-quality datasets or computationally expensive fine-tuning. On the contrary, humans can easily improve themselves by self-thinking and memory, without external resources. In this paper, we propose a framework, **MoT**, to let the LLM self-improve through **M**emory **o**f **T**houghts, without annotated datasets and parameter updates. Specifically, MoT is divided into two stages: 1. before the test stage, the LLM pre-thinks on the unlabeled dataset and saves the high-confidence thoughts as external memory; 2. During the test stage, given a test question, the LLM recalls relevant memory to help itself reason and answer it. Experimental results show that MoT can help ChatGPT significantly improve its abilities in arithmetic reasoning, commonsense reasoning, factual reasoning, and natural language inference. Further analyses show that each component contributes critically to the improvements and MoT can lead to consistent improvements across various CoT methods and LLMs.
[ "Li, Xiaonan", "Qiu, Xipeng" ]
MoT: Memory-of-Thought Enables ChatGPT to Self-Improve
emnlp-main.392
2305.05181
[ "https://github.com/leesureman/mot" ]
https://huggingface.co/papers/2305.05181
0
0
0
2
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.393.bib
https://aclanthology.org/2023.emnlp-main.393/
@inproceedings{gomez-rodriguez-etal-2023-4, title = "4 and 7-bit Labeling for Projective and Non-Projective Dependency Trees", author = "G{\'o}mez-Rodr{\'\i}guez, Carlos and Roca, Diego and Vilares, David", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.393", doi = "10.18653/v1/2023.emnlp-main.393", pages = "6375--6384", abstract = "We introduce an encoding for parsing as sequence labeling that can represent any projective dependency tree as a sequence of 4-bit labels, one per word. The bits in each word{'}s label represent (1) whether it is a right or left dependent, (2) whether it is the outermost (left/right) dependent of its parent, (3) whether it has any left children and (4) whether it has any right children. We show that this provides an injective mapping from trees to labels that can be encoded and decoded in linear time. We then define a 7-bit extension that represents an extra plane of arcs, extending the coverage to almost full non-projectivity (over 99.9{\%} empirical arc coverage). Results on a set of diverse treebanks show that our 7-bit encoding obtains substantial accuracy gains over the previously best-performing sequence labeling encodings.", }
We introduce an encoding for parsing as sequence labeling that can represent any projective dependency tree as a sequence of 4-bit labels, one per word. The bits in each word{'}s label represent (1) whether it is a right or left dependent, (2) whether it is the outermost (left/right) dependent of its parent, (3) whether it has any left children and (4) whether it has any right children. We show that this provides an injective mapping from trees to labels that can be encoded and decoded in linear time. We then define a 7-bit extension that represents an extra plane of arcs, extending the coverage to almost full non-projectivity (over 99.9{\%} empirical arc coverage). Results on a set of diverse treebanks show that our 7-bit encoding obtains substantial accuracy gains over the previously best-performing sequence labeling encodings.
[ "G{\\'o}mez-Rodr{\\'\\i}guez, Carlos", "Roca, Diego", "Vilares, David" ]
4 and 7-bit Labeling for Projective and Non-Projective Dependency Trees
emnlp-main.393
2310.14319
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.394.bib
https://aclanthology.org/2023.emnlp-main.394/
@inproceedings{yang-ettinger-2023-follow, title = "Can You Follow Me? Testing Situational Understanding for {C}hat{GPT}", author = "Yang, Chenghao and Ettinger, Allyson", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.394", doi = "10.18653/v1/2023.emnlp-main.394", pages = "6385--6398", abstract = "Understanding sentence meanings and updating information states appropriately across time{---}what we call {``}situational understanding{''} (SU){---}is a critical ability for human-like AI agents. SU is essential in particular for chat models, such as ChatGPT, to enable consistent, coherent, and effective dialogue between humans and AI. Previous works have identified certain SU limitations in non-chatbot Large Language models (LLMs), but the extent and causes of these limitations are not well understood, and capabilities of current chat-based models in this domain have not been explored. In this work we tackle these questions, proposing a novel synthetic environment for SU testing which allows us to do controlled and systematic testing of SU in chat-oriented models, through assessment of models{'} ability to track and enumerate environment states. Our environment also allows for close analysis of dynamics of model performance, to better understand underlying causes for performance patterns. We apply our test to ChatGPT, the state-of-the-art chatbot, and find that despite the fundamental simplicity of the task, the model{'}s performance reflects an inability to retain correct environment states across time. Our follow-up analyses suggest that performance degradation is largely because ChatGPT has non-persistent in-context memory (although it can access the full dialogue history) and it is susceptible to hallucinated updates{---}including updates that artificially inflate accuracies. Our findings suggest overall that ChatGPT is not currently equipped for robust tracking of situation states, and that trust in the impressive dialogue performance of ChatGPT comes with risks. We release the codebase for reproducing our test environment, as well as all prompts and API responses from ChatGPT, at https://github.com/yangalan123/SituationalTesting.", }
Understanding sentence meanings and updating information states appropriately across time{---}what we call {``}situational understanding{''} (SU){---}is a critical ability for human-like AI agents. SU is essential in particular for chat models, such as ChatGPT, to enable consistent, coherent, and effective dialogue between humans and AI. Previous works have identified certain SU limitations in non-chatbot Large Language models (LLMs), but the extent and causes of these limitations are not well understood, and capabilities of current chat-based models in this domain have not been explored. In this work we tackle these questions, proposing a novel synthetic environment for SU testing which allows us to do controlled and systematic testing of SU in chat-oriented models, through assessment of models{'} ability to track and enumerate environment states. Our environment also allows for close analysis of dynamics of model performance, to better understand underlying causes for performance patterns. We apply our test to ChatGPT, the state-of-the-art chatbot, and find that despite the fundamental simplicity of the task, the model{'}s performance reflects an inability to retain correct environment states across time. Our follow-up analyses suggest that performance degradation is largely because ChatGPT has non-persistent in-context memory (although it can access the full dialogue history) and it is susceptible to hallucinated updates{---}including updates that artificially inflate accuracies. Our findings suggest overall that ChatGPT is not currently equipped for robust tracking of situation states, and that trust in the impressive dialogue performance of ChatGPT comes with risks. We release the codebase for reproducing our test environment, as well as all prompts and API responses from ChatGPT, at https://github.com/yangalan123/SituationalTesting.
[ "Yang, Chenghao", "Ettinger, Allyson" ]
Can You Follow Me? Testing Situational Understanding for ChatGPT
emnlp-main.394
null
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.395.bib
https://aclanthology.org/2023.emnlp-main.395/
@inproceedings{pelrine-etal-2023-towards, title = "Towards Reliable Misinformation Mitigation: Generalization, Uncertainty, and {GPT}-4", author = "Pelrine, Kellin and Imouza, Anne and Thibault, Camille and Reksoprodjo, Meilina and Gupta, Caleb and Christoph, Joel and Godbout, Jean-Fran{\c{c}}ois and Rabbany, Reihaneh", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.395", doi = "10.18653/v1/2023.emnlp-main.395", pages = "6399--6429", abstract = "Misinformation poses a critical societal challenge, and current approaches have yet to produce an effective solution. We propose focusing on generalization, uncertainty, and how to leverage recent large language models, in order to create more practical tools to evaluate information veracity in contexts where perfect classification is impossible. We first demonstrate that GPT-4 can outperform prior methods in multiple settings and languages. Next, we explore generalization, revealing that GPT-4 and RoBERTa-large exhibit differences in failure modes. Third, we propose techniques to handle uncertainty that can detect impossible examples and strongly improve outcomes. We also discuss results on other language models, temperature, prompting, versioning, explainability, and web retrieval, each one providing practical insights and directions for future research. Finally, we publish the LIAR-New dataset with novel paired English and French misinformation data and Possibility labels that indicate if there is sufficient context for veracity evaluation. Overall, this research lays the groundwork for future tools that can drive real-world progress to combat misinformation.", }
Misinformation poses a critical societal challenge, and current approaches have yet to produce an effective solution. We propose focusing on generalization, uncertainty, and how to leverage recent large language models, in order to create more practical tools to evaluate information veracity in contexts where perfect classification is impossible. We first demonstrate that GPT-4 can outperform prior methods in multiple settings and languages. Next, we explore generalization, revealing that GPT-4 and RoBERTa-large exhibit differences in failure modes. Third, we propose techniques to handle uncertainty that can detect impossible examples and strongly improve outcomes. We also discuss results on other language models, temperature, prompting, versioning, explainability, and web retrieval, each one providing practical insights and directions for future research. Finally, we publish the LIAR-New dataset with novel paired English and French misinformation data and Possibility labels that indicate if there is sufficient context for veracity evaluation. Overall, this research lays the groundwork for future tools that can drive real-world progress to combat misinformation.
[ "Pelrine, Kellin", "Imouza, Anne", "Thibault, Camille", "Reksoprodjo, Meilina", "Gupta, Caleb", "Christoph, Joel", "Godbout, Jean-Fran{\\c{c}}ois", "Rabbany, Reihaneh" ]
Towards Reliable Misinformation Mitigation: Generalization, Uncertainty, and GPT-4
emnlp-main.395
2305.14928
[ "https://github.com/complexdata-mila/mitigatemisinfo" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.emnlp-main.396.bib
https://aclanthology.org/2023.emnlp-main.396/
@inproceedings{alhafni-etal-2023-advancements, title = "Advancements in {A}rabic Grammatical Error Detection and Correction: An Empirical Investigation", author = "Alhafni, Bashar and Inoue, Go and Khairallah, Christian and Habash, Nizar", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.396", doi = "10.18653/v1/2023.emnlp-main.396", pages = "6430--6448", abstract = "Grammatical error correction (GEC) is a well-explored problem in English with many existing models and datasets. However, research on GEC in morphologically rich languages has been limited due to challenges such as data scarcity and language complexity. In this paper, we present the first results on Arabic GEC using two newly developed Transformer-based pretrained sequence-to-sequence models. We also define the task of multi-class Arabic grammatical error detection (GED) and present the first results on multi-class Arabic GED. We show that using GED information as auxiliary input in GEC models improves GEC performance across three datasets spanning different genres. Moreover, we also investigate the use of contextual morphological preprocessing in aiding GEC systems. Our models achieve SOTA results on two Arabic GEC shared task datasets and establish a strong benchmark on a recently created dataset. We make our code, data, and pretrained models publicly available.", }
Grammatical error correction (GEC) is a well-explored problem in English with many existing models and datasets. However, research on GEC in morphologically rich languages has been limited due to challenges such as data scarcity and language complexity. In this paper, we present the first results on Arabic GEC using two newly developed Transformer-based pretrained sequence-to-sequence models. We also define the task of multi-class Arabic grammatical error detection (GED) and present the first results on multi-class Arabic GED. We show that using GED information as auxiliary input in GEC models improves GEC performance across three datasets spanning different genres. Moreover, we also investigate the use of contextual morphological preprocessing in aiding GEC systems. Our models achieve SOTA results on two Arabic GEC shared task datasets and establish a strong benchmark on a recently created dataset. We make our code, data, and pretrained models publicly available.
[ "Alhafni, Bashar", "Inoue, Go", "Khairallah, Christian", "Habash, Nizar" ]
Advancements in Arabic Grammatical Error Detection and Correction: An Empirical Investigation
emnlp-main.396
2305.14734
[ "https://github.com/camel-lab/arabic-gec" ]
https://huggingface.co/papers/2305.14734
1
1
0
4
[ "CAMeL-Lab/arabart-qalb14-gec-ged-13", "CAMeL-Lab/camelbert-msa-zaebuc-ged-13", "CAMeL-Lab/arabart-qalb15-gec-ged-13", "CAMeL-Lab/camelbert-msa-qalb15-ged-13", "CAMeL-Lab/camelbert-msa-qalb14-ged-13", "CAMeL-Lab/arabart-zaebuc-gec-ged-13" ]
[]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.397.bib
https://aclanthology.org/2023.emnlp-main.397/
@inproceedings{li-etal-2023-halueval, title = "{H}alu{E}val: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models", author = "Li, Junyi and Cheng, Xiaoxue and Zhao, Xin and Nie, Jian-Yun and Wen, Ji-Rong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.397", doi = "10.18653/v1/2023.emnlp-main.397", pages = "6449--6464", abstract = "Large language models (LLMs), such as ChatGPT, are prone to generate hallucinations, i.e., content that conflicts with the source or cannot be verified by the factual knowledge. To understand what types of content and to which extent LLMs are apt to hallucinate, we introduce the Hallucination Evaluation for Large Language Models (HaluEval) benchmark, a large collection of generated and human-annotated hallucinated samples for evaluating the performance of LLMs in recognizing hallucination. To generate these samples, we propose a ChatGPT-based two-step framework, i.e., sampling-then-filtering. Besides, we also hire some human labelers to annotate the hallucinations in ChatGPT responses. The empirical results suggest that ChatGPT is likely to generate hallucinated content in specific topics by fabricating unverifiable information (i.e., about 19.5{\%} user queries). Moreover, existing LLMs face great challenges in recognizing the hallucinations in texts. While, our experiments also prove that the hallucination recognition can be improved by providing external knowledge or adding reasoning steps.", }
Large language models (LLMs), such as ChatGPT, are prone to generate hallucinations, i.e., content that conflicts with the source or cannot be verified by the factual knowledge. To understand what types of content and to which extent LLMs are apt to hallucinate, we introduce the Hallucination Evaluation for Large Language Models (HaluEval) benchmark, a large collection of generated and human-annotated hallucinated samples for evaluating the performance of LLMs in recognizing hallucination. To generate these samples, we propose a ChatGPT-based two-step framework, i.e., sampling-then-filtering. Besides, we also hire some human labelers to annotate the hallucinations in ChatGPT responses. The empirical results suggest that ChatGPT is likely to generate hallucinated content in specific topics by fabricating unverifiable information (i.e., about 19.5{\%} user queries). Moreover, existing LLMs face great challenges in recognizing the hallucinations in texts. While, our experiments also prove that the hallucination recognition can be improved by providing external knowledge or adding reasoning steps.
[ "Li, Junyi", "Cheng, Xiaoxue", "Zhao, Xin", "Nie, Jian-Yun", "Wen, Ji-Rong" ]
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models
emnlp-main.397
2305.11747
[ "https://github.com/rucaibox/helma" ]
https://huggingface.co/papers/2305.11747
0
0
0
5
[ "grounded-ai/phi3-hallucination-judge", "grounded-ai/phi3-hallucination-judge-merge" ]
[ "achandlr/FactualConsistencyScoresTextSummarization" ]
[]
1
Oral
https://aclanthology.org/2023.emnlp-main.398.bib
https://aclanthology.org/2023.emnlp-main.398/
@inproceedings{gao-etal-2023-enabling, title = "Enabling Large Language Models to Generate Text with Citations", author = "Gao, Tianyu and Yen, Howard and Yu, Jiatong and Chen, Danqi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.398", doi = "10.18653/v1/2023.emnlp-main.398", pages = "6465--6488", abstract = "Large language models (LLMs) have emerged as a widely-used tool for information seeking, but their generated outputs are prone to hallucination. In this work, our aim is to allow LLMs to generate text with citations, improving their factual correctness and verifiability. Existing work mainly relies on commercial search engines and human evaluation, making it challenging to reproduce and compare different modeling approaches. We propose ALCE, the first benchmark for Automatic LLMs{'} Citation Evaluation. ALCE collects a diverse set of questions and retrieval corpora and requires building end-to-end systems to retrieve supporting evidence and generate answers with citations. We develop automatic metrics along three dimensions{---}fluency, correctness, and citation quality{---}and demonstrate their strong correlation with human judgements. Our experiments with state-of-the-art LLMs and novel prompting strategies show that current systems have considerable room for improvement{---}For example, on the ELI5 dataset, even the best models lack complete citation support 50{\%} of the time. Our analyses further highlight promising future directions, including developing better retrievers, advancing long-context LLMs, and improving the ability to synthesize information from multiple sources.", }
Large language models (LLMs) have emerged as a widely-used tool for information seeking, but their generated outputs are prone to hallucination. In this work, our aim is to allow LLMs to generate text with citations, improving their factual correctness and verifiability. Existing work mainly relies on commercial search engines and human evaluation, making it challenging to reproduce and compare different modeling approaches. We propose ALCE, the first benchmark for Automatic LLMs{'} Citation Evaluation. ALCE collects a diverse set of questions and retrieval corpora and requires building end-to-end systems to retrieve supporting evidence and generate answers with citations. We develop automatic metrics along three dimensions{---}fluency, correctness, and citation quality{---}and demonstrate their strong correlation with human judgements. Our experiments with state-of-the-art LLMs and novel prompting strategies show that current systems have considerable room for improvement{---}For example, on the ELI5 dataset, even the best models lack complete citation support 50{\%} of the time. Our analyses further highlight promising future directions, including developing better retrievers, advancing long-context LLMs, and improving the ability to synthesize information from multiple sources.
[ "Gao, Tianyu", "Yen, Howard", "Yu, Jiatong", "Chen, Danqi" ]
Enabling Large Language Models to Generate Text with Citations
emnlp-main.398
2305.14627
[ "https://github.com/princeton-nlp/alce" ]
https://huggingface.co/papers/2305.14627
0
0
0
4
[]
[]
[]
1
Poster
https://aclanthology.org/2023.emnlp-main.399.bib
https://aclanthology.org/2023.emnlp-main.399/
@inproceedings{artetxe-etal-2023-revisiting, title = "Revisiting Machine Translation for Cross-lingual Classification", author = "Artetxe, Mikel and Goswami, Vedanuj and Bhosale, Shruti and Fan, Angela and Zettlemoyer, Luke", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.399", doi = "10.18653/v1/2023.emnlp-main.399", pages = "6489--6499", abstract = "Machine Translation (MT) has been widely used for cross-lingual classification, either by translating the test set into English and running inference with a monolingual model (translate-test), or translating the training set into the target languages and finetuning a multilingual model (translate-train). However, most research in the area focuses on the multilingual models rather than the MT component. We show that, by using a stronger MT system and mitigating the mismatch between training on original text and running inference on machine translated text, translate-test can do substantially better than previously assumed. The optimal approach, however, is highly task dependent, as we identify various sources of cross-lingual transfer gap that affect different tasks and approaches differently. Our work calls into question the dominance of multilingual models for cross-lingual classification, and prompts to pay more attention to MT-based baselines.", }
Machine Translation (MT) has been widely used for cross-lingual classification, either by translating the test set into English and running inference with a monolingual model (translate-test), or translating the training set into the target languages and finetuning a multilingual model (translate-train). However, most research in the area focuses on the multilingual models rather than the MT component. We show that, by using a stronger MT system and mitigating the mismatch between training on original text and running inference on machine translated text, translate-test can do substantially better than previously assumed. The optimal approach, however, is highly task dependent, as we identify various sources of cross-lingual transfer gap that affect different tasks and approaches differently. Our work calls into question the dominance of multilingual models for cross-lingual classification, and prompts to pay more attention to MT-based baselines.
[ "Artetxe, Mikel", "Goswami, Vedanuj", "Bhosale, Shruti", "Fan, Angela", "Zettlemoyer, Luke" ]
Revisiting Machine Translation for Cross-lingual Classification
emnlp-main.399
2305.14240
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral
https://aclanthology.org/2023.emnlp-main.400.bib
https://aclanthology.org/2023.emnlp-main.400/
@inproceedings{deng-etal-2023-pre, title = "Pre-Trained Language Models Augmented with Synthetic Scanpaths for Natural Language Understanding", author = {Deng, Shuwen and Prasse, Paul and Reich, David and Scheffer, Tobias and J{\"a}ger, Lena}, editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.400", doi = "10.18653/v1/2023.emnlp-main.400", pages = "6500--6507", abstract = "Human gaze data offer cognitive information that reflects natural language comprehension. Indeed, augmenting language models with human scanpaths has proven beneficial for a range of NLP tasks, including language understanding. However, the applicability of this approach is hampered because the abundance of text corpora is contrasted by a scarcity of gaze data. Although models for the generation of human-like scanpaths during reading have been developed, the potential of synthetic gaze data across NLP tasks remains largely unexplored. We develop a model that integrates synthetic scanpath generation with a scanpath-augmented language model, eliminating the need for human gaze data. Since the model{'}s error gradient can be propagated throughout all parts of the model, the scanpath generator can be fine-tuned to downstream tasks. We find that the proposed model not only outperforms the underlying language model, but achieves a performance that is comparable to a language model augmented with real human gaze data. Our code is publicly available.", }
Human gaze data offer cognitive information that reflects natural language comprehension. Indeed, augmenting language models with human scanpaths has proven beneficial for a range of NLP tasks, including language understanding. However, the applicability of this approach is hampered because the abundance of text corpora is contrasted by a scarcity of gaze data. Although models for the generation of human-like scanpaths during reading have been developed, the potential of synthetic gaze data across NLP tasks remains largely unexplored. We develop a model that integrates synthetic scanpath generation with a scanpath-augmented language model, eliminating the need for human gaze data. Since the model{'}s error gradient can be propagated throughout all parts of the model, the scanpath generator can be fine-tuned to downstream tasks. We find that the proposed model not only outperforms the underlying language model, but achieves a performance that is comparable to a language model augmented with real human gaze data. Our code is publicly available.
[ "Deng, Shuwen", "Prasse, Paul", "Reich, David", "Scheffer, Tobias", "J{\\\"a}ger, Lena" ]
Pre-Trained Language Models Augmented with Synthetic Scanpaths for Natural Language Understanding
emnlp-main.400
2310.14676
[ "https://github.com/aeye-lab/emnlp-syntheticscanpaths-nlu-pretrainedlm" ]
-1
-1
-1
-1
[]
[]
[]
0
Oral