bibtex_url | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | proceedings | Models | Datasets | Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.acl-long.1.bib | @inproceedings{zhang-etal-2024-quantized,
title = "Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models",
author = "Zhang, Zhengxin and
Zhao, Dan and
Miao, Xupeng and
Oliaro, Gabriele and
Zhang, Zhihao and
Li, Qing and
Jiang, Yong and
Jia, Zhihao",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.1",
pages = "1--17",
abstract = "Finetuning large language models (LLMs) has been empirically effective on a variety of downstream tasks. Existing approaches to finetuning an LLM either focus on parameter-efficient finetuning, which only updates a small number of trainable parameters, or attempt to reduce the memory footprint during the training phase of the finetuning. Typically, the memory footprint during finetuning stems from three contributors: model weights, optimizer states, and intermediate activations. However, existing works still require considerable memory, and none can simultaneously mitigate the memory footprint of all three sources. In this paper, we present quantized side tuing (QST), which enables memory-efficient and fast finetuning of LLMs by operating through a dual-stage process. First, QST quantizes an LLM{'}s model weights into 4-bit to reduce the memory footprint of the LLM{'}s original weights. Second, QST introduces a side network separated from the LLM, which utilizes the hidden states of the LLM to make task-specific predictions. Using a separate side network avoids performing back-propagation through the LLM, thus reducing the memory requirement of the intermediate activations. Finally, QST leverages several low-rank adaptors and gradient-free downsample modules to significantly reduce the trainable parameters, so as to save the memory footprint of the optimizer states. Experiments show that QST can reduce the total memory footprint by up to 2.3{\mbox{$\times$}} and speed up the finetuning process by up to 3$\times$ while achieving competent performance compared with the state-of-the-art. When it comes to full finetuning, QST can reduce the total memory footprint up to 7$\times$.",
}
| Finetuning large language models (LLMs) has been empirically effective on a variety of downstream tasks. Existing approaches to finetuning an LLM either focus on parameter-efficient finetuning, which only updates a small number of trainable parameters, or attempt to reduce the memory footprint during the training phase of the finetuning. Typically, the memory footprint during finetuning stems from three contributors: model weights, optimizer states, and intermediate activations. However, existing works still require considerable memory, and none can simultaneously mitigate the memory footprint of all three sources. In this paper, we present quantized side tuning (QST), which enables memory-efficient and fast finetuning of LLMs by operating through a dual-stage process. First, QST quantizes an LLM{'}s model weights into 4-bit to reduce the memory footprint of the LLM{'}s original weights. Second, QST introduces a side network separated from the LLM, which utilizes the hidden states of the LLM to make task-specific predictions. Using a separate side network avoids performing back-propagation through the LLM, thus reducing the memory requirement of the intermediate activations. Finally, QST leverages several low-rank adaptors and gradient-free downsample modules to significantly reduce the trainable parameters, so as to save the memory footprint of the optimizer states. Experiments show that QST can reduce the total memory footprint by up to 2.3$\times$ and speed up the finetuning process by up to 3$\times$ while achieving competent performance compared with the state-of-the-art. When it comes to full finetuning, QST can reduce the total memory footprint by up to 7$\times$. | [
"Zhang, Zhengxin",
"Zhao, Dan",
"Miao, Xupeng",
"Oliaro, Gabriele",
"Zhang, Zhihao",
"Li, Qing",
"Jiang, Yong",
"Jia, Zhihao"
] | Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models | acl-long.1 | Oral | 2401.07159 | [
"https://github.com/youarespecialtome/qst"
] | https://huggingface.co/papers/2401.07159 | 2 | 0 | 0 | 7 | https://aclanthology.org/2024.acl-long.1/ | [] | [] | [] | 1 |
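The QST row above describes a concrete mechanism: a frozen 4-bit backbone, a detached side network fed by its hidden states, and low-rank plus gradient-free downsampling modules. Below is a minimal PyTorch sketch of that wiring; the pooling downsampler, side width `d_side`, adapter rank, and classification head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def pool_down(hs: torch.Tensor, d_side: int) -> torch.Tensor:
    """Gradient-free downsample: average-pool the feature dim to d_side.
    Assumes d_model is an integer multiple of d_side."""
    b, s, d = hs.shape
    return hs.view(b, s, d_side, d // d_side).mean(dim=-1)

class LowRankAdapter(nn.Module):
    """Rank-r residual bottleneck: only ~2*d*r parameters for the optimizer."""
    def __init__(self, d: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(d, rank, bias=False)
        self.up = nn.Linear(rank, d, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.down(x))

class SideNetwork(nn.Module):
    """Trainable side path over the frozen LLM's per-layer hidden states."""
    def __init__(self, n_layers: int, d_side: int = 256, n_classes: int = 2):
        super().__init__()
        self.adapters = nn.ModuleList(LowRankAdapter(d_side) for _ in range(n_layers))
        self.head = nn.Linear(d_side, n_classes)
        self.d_side = d_side

    def forward(self, hidden_states: list[torch.Tensor]) -> torch.Tensor:
        # hidden_states: n_layers tensors of shape [batch, seq, d_model],
        # detached so autograd never reaches into the quantized backbone.
        b, s, _ = hidden_states[0].shape
        h = torch.zeros(b, s, self.d_side, device=hidden_states[0].device)
        for hs, adapter in zip(hidden_states, self.adapters):
            h = adapter(h + pool_down(hs.detach().float(), self.d_side))
        return self.head(h.mean(dim=1))  # pooled task-specific prediction
```

Because every read of the backbone is wrapped in `.detach()`, autograd never builds a graph through the LLM, which is where the activation-memory savings in the abstract come from.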
https://aclanthology.org/2024.acl-long.2.bib | @inproceedings{zhang-etal-2024-unsupervised,
title = "Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances",
author = "Zhang, Hanlei and
Xu, Hua and
Long, Fei and
Wang, Xin and
Gao, Kai",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.2",
pages = "18--35",
abstract = "Discovering the semantics of multimodal utterances is essential for understanding human language and enhancing human-machine interactions. Existing methods manifest limitations in leveraging nonverbal information for discerning complex semantics in unsupervised scenarios. This paper introduces a novel unsupervised multimodal clustering method (UMC), making a pioneering contribution to this field. UMC introduces a unique approach to constructing augmentation views for multimodal data, which are then used to perform pre-training to establish well-initialized representations for subsequent clustering. An innovative strategy is proposed to dynamically select high-quality samples as guidance for representation learning, gauged by the density of each sample{'}s nearest neighbors. Besides, it is equipped to automatically determine the optimal value for the top-$K$ parameter in each cluster to refine sample selection. Finally, both high- and low-quality samples are used to learn representations conducive to effective clustering. We build baselines on benchmark multimodal intent and dialogue act datasets. UMC shows remarkable improvements of 2-6{\%} scores in clustering metrics over state-of-the-art methods, marking the first successful endeavor in this domain. The complete code and data are available at https://github.com/thuiar/UMC.",
}
| Discovering the semantics of multimodal utterances is essential for understanding human language and enhancing human-machine interactions. Existing methods manifest limitations in leveraging nonverbal information for discerning complex semantics in unsupervised scenarios. This paper introduces a novel unsupervised multimodal clustering method (UMC), making a pioneering contribution to this field. UMC introduces a unique approach to constructing augmentation views for multimodal data, which are then used to perform pre-training to establish well-initialized representations for subsequent clustering. An innovative strategy is proposed to dynamically select high-quality samples as guidance for representation learning, gauged by the density of each sample{'}s nearest neighbors. Besides, it is equipped to automatically determine the optimal value for the top-$K$ parameter in each cluster to refine sample selection. Finally, both high- and low-quality samples are used to learn representations conducive to effective clustering. We build baselines on benchmark multimodal intent and dialogue act datasets. UMC shows remarkable improvements of 2-6{\%} scores in clustering metrics over state-of-the-art methods, marking the first successful endeavor in this domain. The complete code and data are available at https://github.com/thuiar/UMC. | [
"Zhang, Hanlei",
"Xu, Hua",
"Long, Fei",
"Wang, Xin",
"Gao, Kai"
] | Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances | acl-long.2 | Poster | 2405.12775 | [
"https://github.com/thuiar/umc"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.2/ | [] | [] | [] | 0 |
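UMC's sample-selection step (dense neighborhoods mark reliable cluster members) can be sketched with scikit-learn's k-NN machinery. The fixed `k` and `keep_ratio` below are hypothetical knobs; the paper instead determines the top-K value per cluster automatically.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_high_quality(features, labels, k=10, keep_ratio=0.5):
    """Keep the densest samples of each cluster: a small mean distance to
    the k nearest neighbors marks a sample in a dense region, which is
    (heuristically) a more reliable guide for representation learning."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    dist, _ = nn.kneighbors(features)       # column 0 is the point itself
    density = -dist[:, 1:].mean(axis=1)     # higher value = denser region
    keep = np.zeros(len(features), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        n_keep = max(1, int(keep_ratio * len(idx)))
        keep[idx[np.argsort(density[idx])[::-1][:n_keep]]] = True
    return keep
```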
https://aclanthology.org/2024.acl-long.3.bib | @inproceedings{li-etal-2024-mage,
title = "{MAGE}: Machine-generated Text Detection in the Wild",
author = "Li, Yafu and
Li, Qintong and
Cui, Leyang and
Bi, Wei and
Wang, Zhilin and
Wang, Longyue and
Yang, Linyi and
Shi, Shuming and
Zhang, Yue",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.3",
pages = "36--53",
abstract = "Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective deepfake text detection to mitigate risks like the spread of fake news and plagiarism. Existing research has been constrained by evaluating detection methods o specific domains or particular language models. In practical scenarios, however, the detector faces texts from various domains or LLMs without knowing their sources. To this end, we build a comprehensive testbed by gathering texts from diverse human writings and deepfake texts generated by different LLMs. Empirical results on mainstream detection methods demonstrate the difficulties associated with detecting deepfake text in a wide-ranging testbed, particularly in out-of-distribution scenarios. Such difficulties align with the diminishing linguistic differences between the two text sources. Despite challenges, the top-performing detector can identify 84.12{\%} out-of-domain texts generated by a new LLM, indicating the feasibility for application scenarios.",
}
| Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective deepfake text detection to mitigate risks like the spread of fake news and plagiarism. Existing research has been constrained by evaluating detection methods on specific domains or particular language models. In practical scenarios, however, the detector faces texts from various domains or LLMs without knowing their sources. To this end, we build a comprehensive testbed by gathering texts from diverse human writings and deepfake texts generated by different LLMs. Empirical results on mainstream detection methods demonstrate the difficulties associated with detecting deepfake text in a wide-ranging testbed, particularly in out-of-distribution scenarios. Such difficulties align with the diminishing linguistic differences between the two text sources. Despite challenges, the top-performing detector can identify 84.12{\%} of out-of-domain texts generated by a new LLM, indicating the feasibility for application scenarios. | [
"Li, Yafu",
"Li, Qintong",
"Cui, Leyang",
"Bi, Wei",
"Wang, Zhilin",
"Wang, Longyue",
"Yang, Linyi",
"Shi, Shuming",
"Zhang, Yue"
] | MAGE: Machine-generated Text Detection in the Wild | acl-long.3 | Poster | 2305.13242 | [
"https://github.com/yafuly/deepfaketextdetect"
] | https://huggingface.co/papers/2305.13242 | 2 | 0 | 1 | 8 | https://aclanthology.org/2024.acl-long.3/ | [
"yaful/MAGE"
] | [
"yaful/MAGE"
] | [
"yaful/DeepfakeTextDetect"
] | 1 |
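The MAGE evaluation hinges on scoring detectors across in- and out-of-distribution splits. Here is a hedged sketch of that measurement step using only scikit-learn metrics on detector scores; the detector itself and the 84.12% figure come from the paper, not from this code.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate_detector(scores, labels, threshold=0.5):
    """scores: detector's P(machine-generated) per text; labels: 1 = machine,
    0 = human. Returns AUROC plus accuracy and machine-text recall at a fixed
    threshold - the kind of numbers an out-of-distribution comparison uses."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    preds = scores >= threshold
    return {
        "auroc": roc_auc_score(labels, scores),
        "accuracy": accuracy_score(labels, preds),
        "machine_recall": (preds & (labels == 1)).sum() / max(1, (labels == 1).sum()),
    }

# Hypothetical usage with any probabilistic classifier:
# metrics = evaluate_detector(clf.predict_proba(texts)[:, 1], gold_labels)
```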
https://aclanthology.org/2024.acl-long.4.bib | @inproceedings{li-etal-2024-privlm,
title = "{P}riv{LM}-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models",
author = "Li, Haoran and
Guo, Dadi and
Li, Donghao and
Fan, Wei and
Hu, Qi and
Liu, Xin and
Chan, Chunkit and
Yao, Duanyi and
Yao, Yuan and
Song, Yangqiu",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.4",
pages = "54--73",
abstract = "The rapid development of language models (LMs) brings unprecedented accessibility and usage for both models and users. On the one hand, powerful LMs achieve state-of-the-art performance over numerous downstream NLP tasks. On the other hand, more and more attention is paid to unrestricted model accesses that may bring malicious privacy risks of data leakage. To address these issues, many recent works propose privacy-preserving language models (PPLMs) with differential privacy (DP). Unfortunately, different DP implementations make it challenging for a fair comparison among existing PPLMs. In this paper, we present PrivLM-Bench, a multi-perspective privacy evaluation benchmark to empirically and intuitively quantify the privacy leakage of LMs. Instead of only reporting DP parameters, PrivLM-Bench sheds light on the neglected inference data privacy during actual usage. PrivLM-Bench first clearly defines multi-faceted privacy objectives. Then, PrivLM-Bench constructs a unified pipeline to perform private fine-tuning. Lastly, PrivLM-Bench performs existing privacy attacks on LMs with pre-defined privacy objectives as the empirical evaluation results. The empirical attack results are used to fairly and intuitively evaluate the privacy leakage of various PPLMs. We conduct extensive experiments on three datasets of GLUE for mainstream LMs.",
}
| The rapid development of language models (LMs) brings unprecedented accessibility and usage for both models and users. On the one hand, powerful LMs achieve state-of-the-art performance over numerous downstream NLP tasks. On the other hand, more and more attention is paid to unrestricted model accesses that may bring malicious privacy risks of data leakage. To address these issues, many recent works propose privacy-preserving language models (PPLMs) with differential privacy (DP). Unfortunately, different DP implementations make it challenging for a fair comparison among existing PPLMs. In this paper, we present PrivLM-Bench, a multi-perspective privacy evaluation benchmark to empirically and intuitively quantify the privacy leakage of LMs. Instead of only reporting DP parameters, PrivLM-Bench sheds light on the neglected inference data privacy during actual usage. PrivLM-Bench first clearly defines multi-faceted privacy objectives. Then, PrivLM-Bench constructs a unified pipeline to perform private fine-tuning. Lastly, PrivLM-Bench performs existing privacy attacks on LMs with pre-defined privacy objectives as the empirical evaluation results. The empirical attack results are used to fairly and intuitively evaluate the privacy leakage of various PPLMs. We conduct extensive experiments on three datasets of GLUE for mainstream LMs. | [
"Li, Haoran",
"Guo, Dadi",
"Li, Donghao",
"Fan, Wei",
"Hu, Qi",
"Liu, Xin",
"Chan, Chunkit",
"Yao, Duanyi",
"Yao, Yuan",
"Song, Yangqiu"
] | PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models | acl-long.4 | Oral | 2311.04044 | [
"https://github.com/hkust-knowcomp/privlm-bench"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.4/ | [] | [] | [] | 0 |
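One family of attacks a benchmark like PrivLM-Bench runs against fine-tuned LMs is membership inference. The sketch below shows the textbook loss-threshold variant, assuming a Hugging Face-style causal LM interface; the benchmark's actual attack suite is more elaborate and calibrates its thresholds rather than taking them as free parameters.

```python
import torch

@torch.no_grad()
def lm_loss(model, tokenizer, text, device="cpu"):
    """Mean token negative log-likelihood of `text` under a causal LM
    (Hugging Face convention: passing labels returns the LM loss)."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    return model(ids, labels=ids).loss.item()

def loss_threshold_mia(model, tokenizer, candidates, threshold):
    """Textbook membership inference: abnormally low loss suggests the
    sample was seen during (fine-)tuning. Real evaluations calibrate the
    threshold on held-out or shadow data; here it is a free parameter."""
    return [lm_loss(model, tokenizer, t) < threshold for t in candidates]
```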
https://aclanthology.org/2024.acl-long.5.bib | @inproceedings{hu-etal-2024-gentranslate,
title = "{G}en{T}ranslate: Large Language Models are Generative Multilingual Speech and Machine Translators",
author = "Hu, Yuchen and
Chen, Chen and
Yang, Chao-Han and
Li, Ruizhe and
Zhang, Dong and
Chen, Zhehuai and
Chng, EngSiong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.5",
pages = "74--90",
abstract = "Recent advances in large language models (LLMs) have stepped forward the development of multilingual speech and machine translation by its reduced representation errors and incorporated external knowledge. However, both translation tasks typically utilize beam search decoding and top-1 hypothesis selection for inference. These techniques struggle to fully exploit the rich information in the diverse N-best hypotheses, making them less optimal for translation tasks that require a single, high-quality output sequence. In this paper, we propose a new generative paradigm for translation tasks, namely GenTranslate, which builds upon LLMs to generate better results from the diverse translation versions in N-best list. Leveraging the rich linguistic knowledge and strong reasoning abilities of LLMs, our new paradigm can integrate the diverse N-best candidates to generate a higher-quality translation result. Furthermore, to support LLM finetuning, we build and release a HypoTranslate dataset that contains over 592K hypotheses-translation pairs in 11 languages. Experiments on various speech and machine translation benchmarks (e.g., FLEURS, CoVoST-2, WMT) demonstrate that our GenTranslate significantly outperforms the state-of-the-art model.",
}
| Recent advances in large language models (LLMs) have advanced the development of multilingual speech and machine translation through reduced representation errors and incorporated external knowledge. However, both translation tasks typically utilize beam search decoding and top-1 hypothesis selection for inference. These techniques struggle to fully exploit the rich information in the diverse N-best hypotheses, making them less optimal for translation tasks that require a single, high-quality output sequence. In this paper, we propose a new generative paradigm for translation tasks, namely GenTranslate, which builds upon LLMs to generate better results from the diverse translation versions in the N-best list. Leveraging the rich linguistic knowledge and strong reasoning abilities of LLMs, our new paradigm can integrate the diverse N-best candidates to generate a higher-quality translation result. Furthermore, to support LLM finetuning, we build and release a HypoTranslate dataset that contains over 592K hypotheses-translation pairs in 11 languages. Experiments on various speech and machine translation benchmarks (e.g., FLEURS, CoVoST-2, WMT) demonstrate that our GenTranslate significantly outperforms the state-of-the-art model. | [
"Hu, Yuchen",
"Chen, Chen",
"Yang, Chao-Han",
"Li, Ruizhe",
"Zhang, Dong",
"Chen, Zhehuai",
"Chng, EngSiong"
] | GenTranslate: Large Language Models are Generative Multilingual Speech and Machine Translators | acl-long.5 | Oral | 2402.06894 | [
"https://github.com/yuchen005/gentranslate"
] | https://huggingface.co/papers/2402.06894 | 1 | 0 | 0 | 7 | https://aclanthology.org/2024.acl-long.5/ | [
"PeacefulData/GenTranslate"
] | [
"PeacefulData/HypoTranslate"
] | [] | 1 |
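The core GenTranslate move is prompting an LLM with the whole N-best list instead of keeping only the top-1 hypothesis. Below is a minimal prompt builder; the wording is illustrative, not the paper's template.

```python
def nbest_prompt(source, hypotheses, tgt_lang="English"):
    """Build an instruction asking an LLM to fuse an N-best list into one
    improved translation, the core idea behind GenTranslate."""
    lines = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(hypotheses))
    return (
        f"Source: {source}\n"
        f"Here are {len(hypotheses)} candidate {tgt_lang} translations:\n"
        f"{lines}\n"
        f"Combine their complementary information and output a single "
        f"improved {tgt_lang} translation only."
    )
```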
https://aclanthology.org/2024.acl-long.6.bib | @inproceedings{xu-etal-2024-exploring,
title = "Exploring Chain-of-Thought for Multi-modal Metaphor Detection",
author = "Xu, Yanzhi and
Hua, Yueying and
Li, Shichen and
Wang, Zhongqing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.6",
pages = "91--101",
abstract = "Metaphors are commonly found in advertising and internet memes. However, the free form of internet memes often leads to a lack of high-quality textual data. Metaphor detection demands a deep interpretation of both textual and visual elements, requiring extensive common-sense knowledge, which poses a challenge to language models. To address these challenges, we propose a compact framework called C4MMD, which utilizes a \textbf{C}hain-of-Thought(CoT) method \textbf{for} \textbf{M}ulti-modal \textbf{M}etaphor \textbf{D}etection. Specifically, our approach designs a three-step process inspired by CoT that extracts and integrates knowledge from Multi-modal Large Language Models(MLLMs) into smaller ones. We also developed a modality fusion architecture to transform knowledge from large models into metaphor features, supplemented by auxiliary tasks to improve model performance. Experimental results on the MET-MEME dataset demonstrate that our method not only effectively enhances the metaphor detection capabilities of small models but also outperforms existing models. To our knowledge, this is the first systematic study leveraging MLLMs in metaphor detection tasks. The code for our method is publicly available at \url{https://github.com/xyz189411yt/C4MMD}.",
}
| Metaphors are commonly found in advertising and internet memes. However, the free form of internet memes often leads to a lack of high-quality textual data. Metaphor detection demands a deep interpretation of both textual and visual elements, requiring extensive common-sense knowledge, which poses a challenge to language models. To address these challenges, we propose a compact framework called C4MMD, which utilizes a \textbf{C}hain-of-Thought(CoT) method \textbf{for} \textbf{M}ulti-modal \textbf{M}etaphor \textbf{D}etection. Specifically, our approach designs a three-step process inspired by CoT that extracts and integrates knowledge from Multi-modal Large Language Models(MLLMs) into smaller ones. We also developed a modality fusion architecture to transform knowledge from large models into metaphor features, supplemented by auxiliary tasks to improve model performance. Experimental results on the MET-MEME dataset demonstrate that our method not only effectively enhances the metaphor detection capabilities of small models but also outperforms existing models. To our knowledge, this is the first systematic study leveraging MLLMs in metaphor detection tasks. The code for our method is publicly available at \url{https://github.com/xyz189411yt/C4MMD}. | [
"Xu, Yanzhi",
"Hua, Yueying",
"Li, Shichen",
"Wang, Zhongqing"
] | Exploring Chain-of-Thought for Multi-modal Metaphor Detection | acl-long.6 | Poster | | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.6/ | [] | [] | [] | 0 |
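C4MMD's three-step chain-of-thought can be approximated as staged queries to an MLLM whose answers are then handed to a smaller fusion model. The step wording below is an assumption, not the released prompts.

```python
def c4mmd_style_prompts(meme_text):
    """Three-step CoT in the spirit of C4MMD: query an MLLM stage by stage,
    then feed the distilled descriptions to a smaller metaphor classifier."""
    return [
        "Step 1: Describe literally what is shown in the image.",
        f"Step 2: Explain the surface meaning of the caption: '{meme_text}'.",
        "Step 3: Contrast the image and caption meanings; state whether one "
        "domain is being used figuratively for the other, and why.",
    ]
```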
https://aclanthology.org/2024.acl-long.7.bib | @inproceedings{du-etal-2024-bitdistiller,
title = "{B}it{D}istiller: Unleashing the Potential of Sub-4-Bit {LLM}s via Self-Distillation",
author = "Du, DaYou and
Zhang, Yijia and
Cao, Shijie and
Guo, Jiaqi and
Cao, Ting and
Chu, Xiaowen and
Xu, Ningyi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.7",
pages = "102--116",
abstract = "The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language processing, yet it also poses significant deployment challenges. Weight quantization has emerged as a widely embraced solution to reduce memory and computational demands. This paper introduces BitDistiller, a framework that synergizes Quantization-Aware Training (QAT) with Knowledge Distillation (KD) to boost the performance of LLMs at ultra-low precisions (sub-4-bit). Specifically, BitDistiller first incorporates a tailored asymmetric quantization and clipping technique to maximally preserve the fidelity of quantized weights, and then proposes a novel Confidence-Aware Kullback-Leibler Divergence (CAKLD) objective, which is employed in a self-distillation manner to enable faster convergence and superior model performance. Empirical evaluations demonstrate that BitDistiller significantly surpasses existing methods in both 3-bit and 2-bit configurations on general language understanding and complex reasoning benchmarks. Notably, BitDistiller is shown to be more cost-effective, demanding fewer data and training resources. The code is available at https://github.com/DD-DuDa/BitDistiller.",
}
| The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language processing, yet it also poses significant deployment challenges. Weight quantization has emerged as a widely embraced solution to reduce memory and computational demands. This paper introduces BitDistiller, a framework that synergizes Quantization-Aware Training (QAT) with Knowledge Distillation (KD) to boost the performance of LLMs at ultra-low precisions (sub-4-bit). Specifically, BitDistiller first incorporates a tailored asymmetric quantization and clipping technique to maximally preserve the fidelity of quantized weights, and then proposes a novel Confidence-Aware Kullback-Leibler Divergence (CAKLD) objective, which is employed in a self-distillation manner to enable faster convergence and superior model performance. Empirical evaluations demonstrate that BitDistiller significantly surpasses existing methods in both 3-bit and 2-bit configurations on general language understanding and complex reasoning benchmarks. Notably, BitDistiller is shown to be more cost-effective, demanding fewer data and training resources. The code is available at https://github.com/DD-DuDa/BitDistiller. | [
"Du, DaYou",
"Zhang, Yijia",
"Cao, Shijie",
"Guo, Jiaqi",
"Cao, Ting",
"Chu, Xiaowen",
"Xu, Ningyi"
] | BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation | acl-long.7 | Poster | 2402.10631 | [
"https://github.com/dd-duda/bitdistiller"
] | https://huggingface.co/papers/2402.10631 | 1 | 1 | 0 | 7 | https://aclanthology.org/2024.acl-long.7/ | [] | [] | [] | 1 |
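A plausible reconstruction of BitDistiller's CAKLD objective is a confidence-weighted mix of forward and reverse KL between teacher and student token distributions. Treat this PyTorch sketch as an interpretation of the abstract, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def teacher_confidence(teacher_logits, labels):
    """Mean teacher probability on the gold tokens - one simple confidence
    proxy for choosing the blending coefficient gamma."""
    probs = F.softmax(teacher_logits, dim=-1)
    return probs.gather(-1, labels.unsqueeze(-1)).mean().item()

def cakld(student_logits, teacher_logits, gamma):
    """Confidence-aware blend of forward KL(P_teacher || Q_student), which
    is mode-covering, and reverse KL(Q || P), which is mode-seeking."""
    p = F.log_softmax(teacher_logits, dim=-1)
    q = F.log_softmax(student_logits, dim=-1)
    forward = F.kl_div(q, p, log_target=True, reduction="batchmean")  # KL(P||Q)
    reverse = F.kl_div(p, q, log_target=True, reduction="batchmean")  # KL(Q||P)
    return gamma * forward + (1.0 - gamma) * reverse
```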
https://aclanthology.org/2024.acl-long.8.bib | @inproceedings{chen-etal-2024-unified,
title = "A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation and Extrapolation",
author = "Chen, Kai and
Wang, Ye and
Li, Yitong and
Li, Aiping and
Yu, Han and
Song, Xin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.8",
pages = "117--132",
abstract = "Temporal knowledge graph (TKG) reasoning has two settings: interpolation reasoning and extrapolation reasoning. Both of them draw plenty of research interest and have great significance. Methods of the former de-emphasize the temporal correlations among facts sequences, while methods of the latter require strict chronological order of knowledge and ignore inferring clues provided by missing facts of the past. These limit the practicability of TKG applications as almost all of the existing TKG reasoning methods are designed specifically to address either one setting. To this end, this paper proposes an original Temporal PAth-based Reasoning (TPAR) model for both the interpolation and extrapolation reasoning settings. TPAR performs a neural-driven symbolic reasoning fashion that is robust to ambiguous and noisy temporal data, and with fine interpretability as well. Comprehensive experiments show that TPAR outperforms SOTA methods on the link prediction task for both the interpolation and the extrapolation settings. A novel pipeline experimental setting is designed to evaluate the performances of SOTA combinations and the proposed TPAR towards interpolation and extrapolation reasoning. And more diverse experiments are conducted to show the robustness and interpretability of TPAR.",
}
| Temporal knowledge graph (TKG) reasoning has two settings: interpolation reasoning and extrapolation reasoning. Both of them draw plenty of research interest and have great significance. Methods of the former de-emphasize the temporal correlations among fact sequences, while methods of the latter require strict chronological order of knowledge and ignore inferring clues provided by missing facts of the past. These limit the practicability of TKG applications as almost all of the existing TKG reasoning methods are designed specifically to address either one setting. To this end, this paper proposes an original Temporal PAth-based Reasoning (TPAR) model for both the interpolation and extrapolation reasoning settings. TPAR performs neural-driven symbolic reasoning in a fashion that is robust to ambiguous and noisy temporal data, with fine interpretability as well. Comprehensive experiments show that TPAR outperforms SOTA methods on the link prediction task for both the interpolation and the extrapolation settings. A novel pipeline experimental setting is designed to evaluate the performances of SOTA combinations and the proposed TPAR towards interpolation and extrapolation reasoning. More diverse experiments are conducted to show the robustness and interpretability of TPAR. | [
"Chen, Kai",
"Wang, Ye",
"Li, Yitong",
"Li, Aiping",
"Yu, Han",
"Song, Xin"
] | A Unified Temporal Knowledge Graph Reasoning Model Towards Interpolation and Extrapolation | acl-long.8 | Poster | 2405.18106 | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.8/ | [] | [] | [] | 0 |
https://aclanthology.org/2024.acl-long.9.bib | @inproceedings{xu-etal-2024-unsupervised,
title = "Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation",
author = "Xu, Shicheng and
Pang, Liang and
Yu, Mo and
Meng, Fandong and
Shen, Huawei and
Cheng, Xueqi and
Zhou, Jie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.9",
pages = "133--145",
abstract = "Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating additional information from retrieval. However, studies have shown that LLMs still face challenges in effectively using the retrieved information, even ignore it or be misled by it. The key reason is that the training of LLMs does not clearly make LLMs learn how to utilize input retrieved texts with varied quality. In this paper, we propose a novel perspective that considers the role of LLMs in RAG as {``}Information Refiner{''}, which means that regardless of correctness, completeness, or usefulness of retrieved texts, LLMs can consistently integrate knowledge within the retrieved texts and model parameters to generate the texts that are more concise, accurate, and complete than the retrieved texts. To this end, we propose an information refinement training method named INFO-RAG that optimizes LLMs for RAG in an unsupervised manner. INFO-RAG is low-cost and general across various tasks. Extensive experiments on zero-shot prediction of 11 datasets in diverse tasks including Question Answering, Slot-Filling, Language Modeling, Dialogue, and Code Generation show that INFO-RAG improves the performance of LLaMA2 by an average of 9.39{\%} relative points. INFO-RAG also shows advantages in in-context learning and robustness of RAG.",
}
| Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating additional information from retrieval. However, studies have shown that LLMs still face challenges in effectively using the retrieved information, even ignoring it or being misled by it. The key reason is that the training of LLMs does not explicitly teach LLMs how to utilize retrieved input texts of varied quality. In this paper, we propose a novel perspective that considers the role of LLMs in RAG as {``}Information Refiner{''}, which means that regardless of correctness, completeness, or usefulness of retrieved texts, LLMs can consistently integrate knowledge within the retrieved texts and model parameters to generate texts that are more concise, accurate, and complete than the retrieved texts. To this end, we propose an information refinement training method named INFO-RAG that optimizes LLMs for RAG in an unsupervised manner. INFO-RAG is low-cost and general across various tasks. Extensive experiments on zero-shot prediction of 11 datasets in diverse tasks including Question Answering, Slot-Filling, Language Modeling, Dialogue, and Code Generation show that INFO-RAG improves the performance of LLaMA2 by an average of 9.39{\%} relative points. INFO-RAG also shows advantages in in-context learning and robustness of RAG. | [
"Xu, Shicheng",
"Pang, Liang",
"Yu, Mo",
"Meng, F",
"ong",
"Shen, Huawei",
"Cheng, Xueqi",
"Zhou, Jie"
] | Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation | acl-long.9 | Poster | 2402.18150 | [
"https://github.com/xsc1234/info-rag"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.9/ | [] | [] | [] | 0 |
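INFO-RAG's unsupervised training needs (degraded retrieved text, clean target) pairs synthesized from plain text. One hypothetical way to manufacture such pairs, loosely mirroring the correct/incomplete/noisy retrieval scenarios the abstract alludes to; the paper's construction differs in detail.

```python
import random

def make_refinement_pair(passage, scenario="incomplete"):
    """Synthesize an unsupervised (retrieved, target) pair: the input is a
    degraded copy of the passage, the target is the clean passage the model
    must refine it into. Corruption choices here are illustrative."""
    words = passage.split()
    if scenario == "incomplete" and len(words) > 4:
        i = random.randrange(len(words) // 2)
        retrieved = " ".join(words[:i] + words[i + len(words) // 4:])  # drop a span
    elif scenario == "noisy" and len(words) > 8:
        j = random.randrange(len(words) - 8)
        window = words[j:j + 8]
        random.shuffle(window)                       # scramble a local window
        retrieved = " ".join(words[:j] + window + words[j + 8:])
    else:  # "correct": the retrieved text already matches the target
        retrieved = passage
    return {"retrieved": retrieved, "target": passage}
```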
https://aclanthology.org/2024.acl-long.10.bib | @inproceedings{hu-etal-2024-cscd,
title = "{CSCD}-{NS}: a {C}hinese Spelling Check Dataset for Native Speakers",
author = "Hu, Yong and
Meng, Fandong and
Zhou, Jie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.10",
pages = "146--159",
abstract = "In this paper, we present CSCD-NS, the first Chinese spelling check (CSC) dataset designed for native speakers, containing 40,000 samples from a Chinese social platform. Compared with existing CSC datasets aimed at Chinese learners, CSCD-NS is ten times larger in scale and exhibits a distinct error distribution, with a significantly higher proportion of word-level errors. To further enhance the data resource, we propose a novel method that simulates the input process through an input method, generating large-scale and high-quality pseudo data that closely resembles the actual error distribution and outperforms existing methods. Moreover, we investigate the performance of various models in this scenario, including large language models (LLMs), such as ChatGPT. The result indicates that generative models underperform BERT-like classification models due to strict length and pronunciation constraints. The high prevalence of word-level errors also makes CSC for native speakers challenging enough, leaving substantial room for improvement.",
}
| In this paper, we present CSCD-NS, the first Chinese spelling check (CSC) dataset designed for native speakers, containing 40,000 samples from a Chinese social platform. Compared with existing CSC datasets aimed at Chinese learners, CSCD-NS is ten times larger in scale and exhibits a distinct error distribution, with a significantly higher proportion of word-level errors. To further enhance the data resource, we propose a novel method that simulates the input process through an input method, generating large-scale and high-quality pseudo data that closely resembles the actual error distribution and outperforms existing methods. Moreover, we investigate the performance of various models in this scenario, including large language models (LLMs), such as ChatGPT. The result indicates that generative models underperform BERT-like classification models due to strict length and pronunciation constraints. The high prevalence of word-level errors also makes CSC for native speakers challenging enough, leaving substantial room for improvement. | [
"Hu, Yong",
"Meng, F",
"ong",
"Zhou, Jie"
] | CSCD-NS: a Chinese Spelling Check Dataset for Native Speakers | acl-long.10 | Poster | 2211.08788 | [
"https://github.com/nghuyong/cscd-ime"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.10/ | [] | [] | [] | 0 |
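CSCD-NS's pseudo-data method simulates how pinyin input methods produce the errors native speakers actually make. A toy version using the real `pypinyin` package; the three-entry homophone table is a stand-in for a full IME pinyin-to-characters dictionary.

```python
import random
from pypinyin import lazy_pinyin  # pip install pypinyin

# Tiny illustrative homophone table; a real pipeline would derive the
# candidate sets from an input method's dictionary.
HOMOPHONES = {"shi": "是事市式视", "ta": "他她它", "zai": "在再"}

def inject_spelling_errors(sentence, rate=0.1):
    """Replace characters with same-pinyin alternatives to mimic pinyin-IME
    substitution errors (a sketch, not the paper's generation pipeline)."""
    chars = list(sentence)
    for i, (ch, py) in enumerate(zip(chars, lazy_pinyin(sentence))):
        candidates = HOMOPHONES.get(py, "").replace(ch, "")
        if candidates and random.random() < rate:
            chars[i] = random.choice(candidates)
    return "".join(chars)
```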
https://aclanthology.org/2024.acl-long.11.bib | @inproceedings{karakkaparambil-james-etal-2024-evaluating,
title = "Evaluating Dynamic Topic Models",
author = "Karakkaparambil James, Charu and
Nagda, Mayank and
Haji Ghassemi, Nooshin and
Kloft, Marius and
Fellenz, Sophie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.11",
pages = "160--176",
abstract = "There is a lack of quantitative measures to evaluate the progression of topics through time in dynamic topic models (DTMs). Filling this gap, we propose a novel evaluation measure for DTMs that analyzes the changes in the quality of each topic over time. Additionally, we propose an extension combining topic quality with the model{'}s temporal consistency. We demonstrate the utility of the proposed measure by applying it to synthetic data and data from existing DTMs, including DTMs from large language models (LLMs). We also show that the proposed measure correlates well with human judgment. Our findings may help in identifying changing topics, evaluating different DTMs and LLMs, and guiding future research in this area.",
}
| There is a lack of quantitative measures to evaluate the progression of topics through time in dynamic topic models (DTMs). Filling this gap, we propose a novel evaluation measure for DTMs that analyzes the changes in the quality of each topic over time. Additionally, we propose an extension combining topic quality with the model{'}s temporal consistency. We demonstrate the utility of the proposed measure by applying it to synthetic data and data from existing DTMs, including DTMs from large language models (LLMs). We also show that the proposed measure correlates well with human judgment. Our findings may help in identifying changing topics, evaluating different DTMs and LLMs, and guiding future research in this area. | [
"Karakkaparambil James, Charu",
"Nagda, Mayank",
"Haji Ghassemi, Nooshin",
"Kloft, Marius",
"Fellenz, Sophie"
] | Evaluating Dynamic Topic Models | acl-long.11 | Poster | 2309.08627 | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.11/ | [] | [] | [] | 0 |
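A minimal version of the quantities such a measure needs: per-time-slice topic coherence (NPMI from document co-occurrence) and a temporal-consistency proxy (top-word overlap between consecutive slices). The paper's measure combines these more carefully; this is an illustrative baseline.

```python
import math
from collections import Counter
from itertools import combinations

def npmi_coherence(top_words, docs):
    """Mean NPMI over word pairs of one topic, estimated from document-level
    co-occurrence counts. docs: list of token lists for one time slice."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        present = set(doc) & set(top_words)
        for w in present:
            df[w] += 1
        for pair in combinations(sorted(present), 2):
            df[pair] += 1
    scores = []
    for pair in combinations(sorted(set(top_words)), 2):
        p_ab = df[pair] / n
        if p_ab == 0:
            scores.append(-1.0)   # never co-occur: minimum NPMI
        elif p_ab == 1:
            scores.append(1.0)    # always co-occur: maximum NPMI
        else:
            a, b = pair
            pmi = math.log(p_ab * n * n / (df[a] * df[b]))
            scores.append(pmi / -math.log(p_ab))
    return sum(scores) / len(scores)

def temporal_consistency(topic_by_time):
    """Mean top-word overlap between consecutive slices (needs >= 2 slices)."""
    pairs = list(zip(topic_by_time, topic_by_time[1:]))
    return sum(len(set(a) & set(b)) / len(set(a)) for a, b in pairs) / len(pairs)
```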
https://aclanthology.org/2024.acl-long.12.bib | @inproceedings{dong-etal-2024-abilities,
title = "How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition",
author = "Dong, Guanting and
Yuan, Hongyi and
Lu, Keming and
Li, Chengpeng and
Xue, Mingfeng and
Liu, Dayiheng and
Wang, Wei and
Yuan, Zheng and
Zhou, Chang and
Zhou, Jingren",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.12",
pages = "177--198",
abstract = "Large language models (LLMs) with enormous pre-training tokens and parameters emerge diverse abilities, including math reasoning, codegeneration, and instruction following. These abilities are further enhanced by supervised fine-tuning (SFT). While the open-source community has explored ad-hoc SFT for enhancing individual capabilities, proprietary LLMs exhibit versatility across various skills. Therefore, understanding the facilitation of multiple abilities via SFT is paramount. In this study, we specificially focuses on the interplay of data composition between mathematical reasoning, code generation, and general human-aligning abilities during SFT. We propose four intriguing research questions to explore the association between model performance and various factors including data amount, composition ratio, model size and SFT strategies. Our experiments reveal that distinct capabilities scale differently and larger models generally show superior performance with same amount of data. Mathematical reasoning and code generation consistently improve with increasing data amount, whereas general abilities plateau after roughly a thousand samples. Moreover, we observe data composition appears to enhance various abilities under limited data conditions, yet can lead to performance conflicts when data is plentiful. Our findings also suggest the amount of composition data influences performance more than the composition ratio. In analysis of SFT strategies, we find that sequentially learning multiple skills risks catastrophic forgetting. Our proposed Dual-stage Mixed Fine-tuning (DMT) strategy offers a promising solution to learn multiple abilities with different scaling patterns.",
}
| Large language models (LLMs) with enormous pre-training tokens and parameters exhibit diverse abilities, including math reasoning, code generation, and instruction following. These abilities are further enhanced by supervised fine-tuning (SFT). While the open-source community has explored ad-hoc SFT for enhancing individual capabilities, proprietary LLMs exhibit versatility across various skills. Therefore, understanding the facilitation of multiple abilities via SFT is paramount. In this study, we specifically focus on the interplay of data composition between mathematical reasoning, code generation, and general human-aligning abilities during SFT. We propose four intriguing research questions to explore the association between model performance and various factors including data amount, composition ratio, model size and SFT strategies. Our experiments reveal that distinct capabilities scale differently and larger models generally show superior performance with the same amount of data. Mathematical reasoning and code generation consistently improve with increasing data amount, whereas general abilities plateau after roughly a thousand samples. Moreover, we observe data composition appears to enhance various abilities under limited data conditions, yet can lead to performance conflicts when data is plentiful. Our findings also suggest the amount of composition data influences performance more than the composition ratio. In our analysis of SFT strategies, we find that sequentially learning multiple skills risks catastrophic forgetting. Our proposed Dual-stage Mixed Fine-tuning (DMT) strategy offers a promising solution to learn multiple abilities with different scaling patterns. | [
"Dong, Guanting",
"Yuan, Hongyi",
"Lu, Keming",
"Li, Chengpeng",
"Xue, Mingfeng",
"Liu, Dayiheng",
"Wang, Wei",
"Yuan, Zheng",
"Zhou, Chang",
"Zhou, Jingren"
] | How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition | acl-long.12 | Poster | 2310.05492 | [
"https://github.com/ofa-sys/gsm8k-screl"
] | https://huggingface.co/papers/2310.05492 | 3 | 2 | 0 | 10 | https://aclanthology.org/2024.acl-long.12/ | [] | [] | [] | 1 |
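The paper's Dual-stage Mixed Fine-tuning reduces to a data schedule: specialty data first, then general data salted with a small specialty fraction to fight forgetting. A sketch follows, with the mixing ratio `k` as a placeholder knob rather than the paper's tuned value.

```python
import random

def dmt_stages(specialty, general, k=1 / 256):
    """Dual-stage Mixed Fine-tuning schedule: stage 1 trains on specialized
    data (e.g., math + code), stage 2 trains on general data mixed with a
    small proportion k of specialty examples to limit catastrophic
    forgetting of the stage-1 skills."""
    stage1 = list(specialty)
    stage2 = list(general) + random.sample(specialty, max(1, int(k * len(specialty))))
    random.shuffle(stage2)
    return stage1, stage2
```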
https://aclanthology.org/2024.acl-long.13.bib | @inproceedings{xu-etal-2024-lens,
title = "Through the Lens of Split Vote: Exploring Disagreement, Difficulty and Calibration in Legal Case Outcome Classification",
author = "Xu, Shanshan and
T.y.s.s, Santosh and
Ichim, Oana and
Plank, Barbara and
Grabmair, Matthias",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.13",
pages = "199--216",
abstract = "In legal decisions, split votes (SV) occur when judges cannot reach a unanimous decision, posing a difficulty for lawyers who must navigate diverse legal arguments and opinions. In high-stakes domains, {\%}as human-AI interaction systems become increasingly important, understanding the alignment of perceived difficulty between humans and AI systems is crucial to build trust. However, existing NLP calibration methods focus on a classifier{'}s awareness of predictive performance, measured against the human majority class, overlooking inherent human label variation (HLV). This paper explores split votes as naturally observable human disagreement and value pluralism. We collect judges{'} vote distributions from the European Court of Human Rights (ECHR), and present SV-ECHR, a case outcome classification (COC) dataset with SV information. We build a taxonomy of disagreement with SV-specific subcategories. We further assess the alignment of perceived difficulty between models and humans, as well as confidence- and human-calibration of COC models. We observe limited alignment with the judge vote distribution. To our knowledge, this is the first systematic exploration of calibration to human judgements in legal NLP. Our study underscores the necessity for further research on measuring and enhancing model calibration considering HLV in legal decision tasks.",
}
| In legal decisions, split votes (SV) occur when judges cannot reach a unanimous decision, posing a difficulty for lawyers who must navigate diverse legal arguments and opinions. In high-stakes domains, as human-AI interaction systems become increasingly important, understanding the alignment of perceived difficulty between humans and AI systems is crucial to build trust. However, existing NLP calibration methods focus on a classifier{'}s awareness of predictive performance, measured against the human majority class, overlooking inherent human label variation (HLV). This paper explores split votes as naturally observable human disagreement and value pluralism. We collect judges{'} vote distributions from the European Court of Human Rights (ECHR), and present SV-ECHR, a case outcome classification (COC) dataset with SV information. We build a taxonomy of disagreement with SV-specific subcategories. We further assess the alignment of perceived difficulty between models and humans, as well as confidence- and human-calibration of COC models. We observe limited alignment with the judge vote distribution. To our knowledge, this is the first systematic exploration of calibration to human judgements in legal NLP. Our study underscores the necessity for further research on measuring and enhancing model calibration considering HLV in legal decision tasks. | [
"Xu, Shanshan",
"T.y.s.s, Santosh",
"Ichim, Oana",
"Plank, Barbara",
"Grabmair, Matthias"
] | Through the Lens of Split Vote: Exploring Disagreement, Difficulty and Calibration in Legal Case Outcome Classification | acl-long.13 | Oral | 2402.07214 | [
""
] | https://huggingface.co/papers/2402.07214 | 0 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-long.13/ | [] | [
"sxu/SV-ECHR"
] | [] | 1 |
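Human-calibration in this setting means comparing the model's predictive distribution with the judges' vote distribution. Two simple proxies (mean KL divergence, and rank correlation of entropies to test whether the model is uncertain on the same cases humans disagree on) sketched with SciPy; the paper's analysis is richer.

```python
import numpy as np
from scipy.stats import entropy, spearmanr

def human_alignment(model_probs, vote_dists):
    """model_probs, vote_dists: [n_cases, n_outcomes] arrays (rows sum to 1).
    Reports mean KL(human votes || model) and the rank correlation between
    human disagreement and model uncertainty (entropy vs. entropy)."""
    model_probs = np.asarray(model_probs, dtype=float)
    vote_dists = np.asarray(vote_dists, dtype=float)
    per_case_kl = entropy(vote_dists.T, model_probs.T)   # KL along outcomes
    rho, _ = spearmanr(entropy(vote_dists.T), entropy(model_probs.T))
    return {"mean_kl": per_case_kl.mean(), "difficulty_corr": rho}
```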
https://aclanthology.org/2024.acl-long.14.bib | @inproceedings{dalal-etal-2024-inference,
title = "Inference to the Best Explanation in Large Language Models",
author = "Dalal, Dhairya and
Valentino, Marco and
Freitas, Andre and
Buitelaar, Paul",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.14",
pages = "217--235",
abstract = "While Large Language Models (LLMs) have found success in real-world applications, their underlying explanatory process is still poorly understood. This paper proposes \textit{IBE-Eval}, a framework inspired by philosophical accounts on \textit{Inference to the Best Explanation (IBE)} to advance the interpretation and evaluation of LLMs{'} explanations. \textit{IBE-Eval} estimates the plausibility of natural language explanations through a combination of explicit logical and linguistic features including: \textit{consistency}, \textit{parsimony}, \textit{coherence}, and \textit{uncertainty}. Extensive experiments are conducted on \textit{Causal Question Answering (CQA)}, where \textit{IBE-Eval} is tasked to select the most plausible causal explanation amongst competing ones generated by LLMs (i.e., GPT 3.5 and Llama 2). The experiments reveal that \textit{IBE-Eval} can successfully identify the best explanation with up to 77{\%} accuracy ($\approx 27\%$ above random), improving upon a GPT 3.5-as-a-Judge baseline ($\approx+17\%$) while being intrinsically more efficient and interpretable. Additional analyses suggest that, despite model-specific variances, LLM-generated explanations tend to conform to IBE criteria and that \textit{IBE-Eval} is significantly correlated with human judgment, opening up opportunities for future development of automated explanation verification tools.",
}
| While Large Language Models (LLMs) have found success in real-world applications, their underlying explanatory process is still poorly understood. This paper proposes \textit{IBE-Eval}, a framework inspired by philosophical accounts on \textit{Inference to the Best Explanation (IBE)} to advance the interpretation and evaluation of LLMs{'} explanations. \textit{IBE-Eval} estimates the plausibility of natural language explanations through a combination of explicit logical and linguistic features including: \textit{consistency}, \textit{parsimony}, \textit{coherence}, and \textit{uncertainty}. Extensive experiments are conducted on \textit{Causal Question Answering (CQA)}, where \textit{IBE-Eval} is tasked to select the most plausible causal explanation amongst competing ones generated by LLMs (i.e., GPT 3.5 and Llama 2). The experiments reveal that \textit{IBE-Eval} can successfully identify the best explanation with up to 77{\%} accuracy ($\approx 27\%$ above random), improving upon a GPT 3.5-as-a-Judge baseline ($\approx+17\%$) while being intrinsically more efficient and interpretable. Additional analyses suggest that, despite model-specific variances, LLM-generated explanations tend to conform to IBE criteria and that \textit{IBE-Eval} is significantly correlated with human judgment, opening up opportunities for future development of automated explanation verification tools. | [
"Dalal, Dhairya",
"Valentino, Marco",
"Freitas, Andre",
"Buitelaar, Paul"
] | Inference to the Best Explanation in Large Language Models | acl-long.14 | Poster | 2402.10767 | [
""
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.14/ | [] | [] | [] | 0 |
https://aclanthology.org/2024.acl-long.15.bib | @inproceedings{poesina-etal-2024-novel,
title = "A Novel Cartography-Based Curriculum Learning Method Applied on {R}o{NLI}: The First {R}omanian Natural Language Inference Corpus",
author = "Poesina, Eduard and
Caragea, Cornelia and
Ionescu, Radu",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.15",
pages = "236--253",
abstract = "Natural language inference (NLI), the task of recognizing the entailment relationship in sentence pairs, is an actively studied topic serving as a proxy for natural language understanding. Despite the relevance of the task in building conversational agents and improving text classification, machine translation and other NLP tasks, to the best of our knowledge, there is no publicly available NLI corpus for the Romanian language. To this end, we introduce the first Romanian NLI corpus (RoNLI) comprising 58K training sentence pairs, which are obtained via distant supervision, and 6K validation and test sentence pairs, which are manually annotated with the correct labels. We conduct experiments with multiple machine learning methods based on distant learning, ranging from shallow models based on word embeddings to transformer-based neural networks, to establish a set of competitive baselines. Furthermore, we improve on the best model by employing a new curriculum learning strategy based on data cartography. Our dataset and code to reproduce the baselines are available at https://github.com/Eduard6421/RONLI.",
}
| Natural language inference (NLI), the task of recognizing the entailment relationship in sentence pairs, is an actively studied topic serving as a proxy for natural language understanding. Despite the relevance of the task in building conversational agents and improving text classification, machine translation and other NLP tasks, to the best of our knowledge, there is no publicly available NLI corpus for the Romanian language. To this end, we introduce the first Romanian NLI corpus (RoNLI) comprising 58K training sentence pairs, which are obtained via distant supervision, and 6K validation and test sentence pairs, which are manually annotated with the correct labels. We conduct experiments with multiple machine learning methods based on distant learning, ranging from shallow models based on word embeddings to transformer-based neural networks, to establish a set of competitive baselines. Furthermore, we improve on the best model by employing a new curriculum learning strategy based on data cartography. Our dataset and code to reproduce the baselines are available at https://github.com/Eduard6421/RONLI. | [
"Poesina, Eduard",
"Caragea, Cornelia",
"Ionescu, Radu"
] | A Novel Cartography-Based Curriculum Learning Method Applied on RoNLI: The First Romanian Natural Language Inference Corpus | acl-long.15 | Poster | 2405.11877 | [
"https://github.com/eduard6421/ronli"
] | | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.15/ | [] | [] | [] | 0 |
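Data cartography summarizes each training example by the mean and standard deviation of the gold-label probability across epochs; a curriculum then orders examples by those statistics. A minimal NumPy sketch (the paper's cartography-based strategy builds on, but is not identical to, this ordering):

```python
import numpy as np

def cartography_curriculum(gold_probs_per_epoch):
    """gold_probs_per_epoch: [n_epochs, n_samples] array holding the
    probability the model assigns to the gold label after each epoch.
    Confidence = mean over epochs, variability = std; the returned index
    order presents high-confidence (easy) examples first."""
    probs = np.asarray(gold_probs_per_epoch)
    confidence = probs.mean(axis=0)
    variability = probs.std(axis=0)
    # lexsort: last key is primary -> sort by descending confidence,
    # breaking ties with ascending variability.
    return np.lexsort((variability, -confidence))
```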
https://aclanthology.org/2024.acl-long.16.bib | @inproceedings{chen-etal-2024-minprompt,
title = "{M}in{P}rompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering",
author = "Chen, Xiusi and
Jiang, Jyun-Yu and
Chang, Wei-Cheng and
Hsieh, Cho-Jui and
Yu, Hsiang-Fu and
Wang, Wei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.16",
pages = "254--266",
abstract = "Recent advances in few-shot question answering (QA) mostly rely on the power of pre-trained large language models (LLMs) and fine-tuning in specific settings. Although the pre-training stage has already equipped LLMs with powerful reasoning capabilities, LLMs still need to be fine-tuned to adapt to specific domains to achieve the best results. In this paper, we propose to select the most informative data for fine-tuning, thereby improving the efficiency of the fine-tuning process with comparative or even better accuracy on the open-domain QA task. We present MinPrompt, a minimal data augmentation framework for open-domain QA based on an approximate graph algorithm and unsupervised question generation. We transform the raw text into a graph structure to build connections between different factual sentences, then apply graph algorithms to identify the minimal set of sentences needed to cover the most information in the raw text. We then generate QA pairs based on the identified sentence subset and train the model on the selected sentences to obtain the final model. Empirical results on several benchmark datasets and theoretical analysis show that MinPrompt is able to achieve comparable or better results than baselines with a high degree of efficiency, bringing consistent improvements in F-1 scores.",
}
| Recent advances in few-shot question answering (QA) mostly rely on the power of pre-trained large language models (LLMs) and fine-tuning in specific settings. Although the pre-training stage has already equipped LLMs with powerful reasoning capabilities, LLMs still need to be fine-tuned to adapt to specific domains to achieve the best results. In this paper, we propose to select the most informative data for fine-tuning, thereby improving the efficiency of the fine-tuning process with comparable or even better accuracy on the open-domain QA task. We present MinPrompt, a minimal data augmentation framework for open-domain QA based on an approximate graph algorithm and unsupervised question generation. We transform the raw text into a graph structure to build connections between different factual sentences, then apply graph algorithms to identify the minimal set of sentences needed to cover the most information in the raw text. We then generate QA pairs based on the identified sentence subset and train the model on the selected sentences to obtain the final model. Empirical results on several benchmark datasets and theoretical analysis show that MinPrompt is able to achieve comparable or better results than baselines with a high degree of efficiency, bringing consistent improvements in F-1 scores. | [
"Chen, Xiusi",
"Jiang, Jyun-Yu",
"Chang, Wei-Cheng",
"Hsieh, Cho-Jui",
"Yu, Hsiang-Fu",
"Wang, Wei"
] | MinPrompt: Graph-based Minimal Prompt Data Augmentation for Few-shot Question Answering | acl-long.16 | Poster | 2310.05007 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.16/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.17.bib | @inproceedings{hu-etal-2024-sportsmetrics,
title = "{S}ports{M}etrics: Blending Text and Numerical Data to Understand Information Fusion in {LLM}s",
author = "Hu, Yebowen and
Song, Kaiqiang and
Cho, Sangwoo and
Wang, Xiaoyang and
Foroosh, Hassan and
Yu, Dong and
Liu, Fei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.17",
pages = "267--278",
abstract = "Large language models hold significant potential for integrating various data types, such as text documents and database records, for advanced analytics. However, blending text and numerical data presents substantial challenges. LLMs need to process and cross-reference entities and numbers, handle data inconsistencies and redundancies, and develop planning capabilities such as building a working memory for managing complex data queries. In this paper, we introduce four novel tasks centered around sports data analytics to evaluate the numerical reasoning and information fusion capabilities of LLMs. These tasks involve providing LLMs with detailed, play-by-play sports game descriptions, then challenging them with adversarial scenarios such as new game rules, longer durations, scrambled narratives, and analyzing key statistics in game summaries. We conduct extensive experiments on NBA and NFL games to assess the performance of LLMs on these tasks. Our benchmark, SportsMetrics, introduces a new mechanism for assessing LLMs{'} numerical reasoning and fusion skills.",
}
| Large language models hold significant potential for integrating various data types, such as text documents and database records, for advanced analytics. However, blending text and numerical data presents substantial challenges. LLMs need to process and cross-reference entities and numbers, handle data inconsistencies and redundancies, and develop planning capabilities such as building a working memory for managing complex data queries. In this paper, we introduce four novel tasks centered around sports data analytics to evaluate the numerical reasoning and information fusion capabilities of LLMs. These tasks involve providing LLMs with detailed, play-by-play sports game descriptions, then challenging them with adversarial scenarios such as new game rules, longer durations, scrambled narratives, and analyzing key statistics in game summaries. We conduct extensive experiments on NBA and NFL games to assess the performance of LLMs on these tasks. Our benchmark, SportsMetrics, introduces a new mechanism for assessing LLMs{'} numerical reasoning and fusion skills. | [
"Hu, Yebowen",
"Song, Kaiqiang",
"Cho, Sangwoo",
"Wang, Xiaoyang",
"Foroosh, Hassan",
"Yu, Dong",
"Liu, Fei"
] | SportsMetrics: Blending Text and Numerical Data to Understand Information Fusion in LLMs | acl-long.17 | Poster | 2402.10979 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.17/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.18.bib | @inproceedings{wang-etal-2024-scimon,
title = "{S}ci{MON}: Scientific Inspiration Machines Optimized for Novelty",
author = "Wang, Qingyun and
Downey, Doug and
Ji, Heng and
Hope, Tom",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.18",
pages = "279--299",
abstract = "We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature. Work on literature-based hypothesis generation has traditionally focused on binary link prediction{---}severely limiting the expressivity of hypotheses. This line of work also does not focus on optimizing novelty. We take a dramatic departure with a novel setting in which models use as input background contexts (e.g., problems, experimental settings, goals), and output natural language ideas grounded in literature. We present SciMON, a modeling framework that uses retrieval of {``}inspirations{''} from past scientific papers, and explicitly optimizes for novelty by iteratively comparing to prior papers and updating idea suggestions until sufficient novelty is achieved. Comprehensive evaluations reveal that GPT-4 tends to generate ideas with overall low technical depth and novelty, while our methods partially mitigate this issue. Our work represents a first step toward evaluating and developing language models that generate new ideas derived from the scientific literature. Code, data, and resources are publicly available for research purposes: https://github.com/eaglew/clbd.",
}
| We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature. Work on literature-based hypothesis generation has traditionally focused on binary link prediction{---}severely limiting the expressivity of hypotheses. This line of work also does not focus on optimizing novelty. We take a dramatic departure with a novel setting in which models use as input background contexts (e.g., problems, experimental settings, goals), and output natural language ideas grounded in literature. We present SciMON, a modeling framework that uses retrieval of {``}inspirations{''} from past scientific papers, and explicitly optimizes for novelty by iteratively comparing to prior papers and updating idea suggestions until sufficient novelty is achieved. Comprehensive evaluations reveal that GPT-4 tends to generate ideas with overall low technical depth and novelty, while our methods partially mitigate this issue. Our work represents a first step toward evaluating and developing language models that generate new ideas derived from the scientific literature. Code, data, and resources are publicly available for research purposes: https://github.com/eaglew/clbd. | [
"Wang, Qingyun",
"Downey, Doug",
"Ji, Heng",
"Hope, Tom"
] | SciMON: Scientific Inspiration Machines Optimized for Novelty | acl-long.18 | Poster | 2305.14259 | [
"https://github.com/eaglew/clbd"
] | https://huggingface.co/papers/2305.14259 | 1 | 1 | 0 | 4 | https://aclanthology.org/2024.acl-long.18/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.19.bib | @inproceedings{jian-etal-2024-expedited,
title = "Expedited Training of Visual Conditioned Language Generation via Redundancy Reduction",
author = "Jian, Yiren and
Liu, Tingkai and
Tao, Yunzhe and
Zhang, Chunhui and
Vosoughi, Soroush and
Yang, Hongxia",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.19",
pages = "300--314",
abstract = "We introduce $\text{EVL}_{\text{Gen}}$, a streamlined framework designed for the pre-training of visually conditioned language generation models with high computational demands, utilizing frozen pre-trained large language models (LLMs). The conventional approach in vision-language pre-training (VLP) typically involves a two-stage optimization process: an initial resource-intensive phase dedicated to general-purpose vision-language representation learning, focused on extracting and consolidating relevant visual features. This is followed by a subsequent phase that emphasizes end-to-end alignment between visual and linguistic modalities. Our novel one-stage, single-loss framework bypasses the computationally demanding first training stage by gradually merging similar visual tokens during training, while avoiding model collapse caused by single-stage training of BLIP-2 type models. The gradual merging process effectively condenses visual information while preserving semantic richness, resulting in rapid convergence without compromising performance. Our experimental findings demonstrate that our approach accelerates the training of vision-language models by a factor of 5 without a noticeable impact on overall performance. Furthermore, we illustrate that our models significantly narrow the performance gap to current vision-language models using only 1/10 of the data. Finally, we showcase how our image-text models can seamlessly adapt to video-conditioned language generation tasks through novel soft attentive temporal token contextualizing modules. Code: https://github.com/yiren-jian/EVLGen",
}
| We introduce $\text{EVL}_{\text{Gen}}$, a streamlined framework designed for the pre-training of visually conditioned language generation models with high computational demands, utilizing frozen pre-trained large language models (LLMs). The conventional approach in vision-language pre-training (VLP) typically involves a two-stage optimization process: an initial resource-intensive phase dedicated to general-purpose vision-language representation learning, focused on extracting and consolidating relevant visual features. This is followed by a subsequent phase that emphasizes end-to-end alignment between visual and linguistic modalities. Our novel one-stage, single-loss framework bypasses the computationally demanding first training stage by gradually merging similar visual tokens during training, while avoiding model collapse caused by single-stage training of BLIP-2 type models. The gradual merging process effectively condenses visual information while preserving semantic richness, resulting in rapid convergence without compromising performance. Our experimental findings demonstrate that our approach accelerates the training of vision-language models by a factor of 5 without a noticeable impact on overall performance. Furthermore, we illustrate that our models significantly narrow the performance gap to current vision-language models using only 1/10 of the data. Finally, we showcase how our image-text models can seamlessly adapt to video-conditioned language generation tasks through novel soft attentive temporal token contextualizing modules. Code: https://github.com/yiren-jian/EVLGen | [
"Jian, Yiren",
"Liu, Tingkai",
"Tao, Yunzhe",
"Zhang, Chunhui",
"Vosoughi, Soroush",
"Yang, Hongxia"
] | Expedited Training of Visual Conditioned Language Generation via Redundancy Reduction | acl-long.19 | Oral | 2310.03291 | [
"https://github.com/yiren-jian/evlgen"
] | https://huggingface.co/papers/2310.03291 | 0 | 1 | 0 | 5 | https://aclanthology.org/2024.acl-long.19/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.20.bib | @inproceedings{kumar-etal-2024-confidence,
title = "Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models",
author = "Kumar, Abhishek and
Morabito, Robert and
Umbet, Sanzhar and
Kabbara, Jad and
Emami, Ali",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.20",
pages = "315--334",
abstract = "As the use of Large Language Models (LLMs) becomes more widespread, understanding their self-evaluation of confidence in generated responses becomes increasingly important as it is integral to the reliability of the output of these models. We introduce the concept of Confidence-Probability Alignment, that connects an LLM{'}s internal confidence, quantified by token probabilities, to the confidence conveyed in the model{'}s response when explicitly asked about its certainty. Using various datasets and prompting techniques that encourage model introspection, we probe the alignment between models{'} internal and expressed confidence. These techniques encompass using structured evaluation scales to rate confidence, including answer options when prompting, and eliciting the model{'}s confidence level for outputs it does not recognize as its own. Notably, among the models analyzed, OpenAI{'}s GPT-4 showed the strongest confidence-probability alignment, with an average Spearman{'}s $\hat{\rho}$ of 0.42, across a wide range of tasks. Our work contributes to the ongoing efforts to facilitate risk assessment in the application of LLMs and to further our understanding of model trustworthiness.",
}
| As the use of Large Language Models (LLMs) becomes more widespread, understanding their self-evaluation of confidence in generated responses becomes increasingly important as it is integral to the reliability of the output of these models. We introduce the concept of Confidence-Probability Alignment, which connects an LLM{'}s internal confidence, quantified by token probabilities, to the confidence conveyed in the model{'}s response when explicitly asked about its certainty. Using various datasets and prompting techniques that encourage model introspection, we probe the alignment between models{'} internal and expressed confidence. These techniques encompass using structured evaluation scales to rate confidence, including answer options when prompting, and eliciting the model{'}s confidence level for outputs it does not recognize as its own. Notably, among the models analyzed, OpenAI{'}s GPT-4 showed the strongest confidence-probability alignment, with an average Spearman{'}s $\hat{\rho}$ of 0.42, across a wide range of tasks. Our work contributes to the ongoing efforts to facilitate risk assessment in the application of LLMs and to further our understanding of model trustworthiness. | [
"Kumar, Abhishek",
"Morabito, Robert",
"Umbet, Sanzhar",
"Kabbara, Jad",
"Emami, Ali"
] | Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models | acl-long.20 | Poster | 2405.16282 | [
"https://github.com/akkeshav/confidence_probability_alignment"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.20/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.21.bib | @inproceedings{wang-etal-2024-retrieval,
title = "Retrieval-Augmented Multilingual Knowledge Editing",
author = "Wang, Weixuan and
Haddow, Barry and
Birch, Alexandra",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.21",
pages = "335--354",
abstract = "Knowledge represented in Large Language Models (LLMs) is quite often incorrect and can also become obsolete over time. Updating knowledge via fine-tuning is computationally resource-hungry and not reliable, and so knowledge editing (KE) has developed as an effective and economical alternative to inject new knowledge or to fix factual errors in LLMs. Although there has been considerable interest in this area, current KE research exclusively focuses on monolingual settings, typically in English. However, what happens if the new knowledge is supplied in one language, but we would like to query an LLM in a different language? To address the problem of multilingual knowledge editing, we propose Retrieval-Augmented Multilingual Knowledge Editor (ReMaKE) to update knowledge in LLMs. ReMaKE can be used to perform model-agnostic knowledge editing in a multilingual setting. ReMaKE concatenates the new knowledge retrieved from a multilingual knowledge base with users{'} prompts before querying an LLM. Our experimental results show that ReMaKE outperforms baseline knowledge editing methods by a significant margin and is scalable to real-word application scenarios. Our multilingual knowledge editing dataset (MzsRE) in 12 languages, the code, and additional project information are available at https://github.com/weixuan-wang123/ReMaKE.",
}
| Knowledge represented in Large Language Models (LLMs) is quite often incorrect and can also become obsolete over time. Updating knowledge via fine-tuning is computationally resource-hungry and not reliable, and so knowledge editing (KE) has developed as an effective and economical alternative to inject new knowledge or to fix factual errors in LLMs. Although there has been considerable interest in this area, current KE research exclusively focuses on monolingual settings, typically in English. However, what happens if the new knowledge is supplied in one language, but we would like to query an LLM in a different language? To address the problem of multilingual knowledge editing, we propose Retrieval-Augmented Multilingual Knowledge Editor (ReMaKE) to update knowledge in LLMs. ReMaKE can be used to perform model-agnostic knowledge editing in a multilingual setting. ReMaKE concatenates the new knowledge retrieved from a multilingual knowledge base with users{'} prompts before querying an LLM. Our experimental results show that ReMaKE outperforms baseline knowledge editing methods by a significant margin and is scalable to real-world application scenarios. Our multilingual knowledge editing dataset (MzsRE) in 12 languages, the code, and additional project information are available at https://github.com/weixuan-wang123/ReMaKE. | [
"Wang, Weixuan",
"Haddow, Barry",
"Birch, Alex",
"ra"
] | Retrieval-Augmented Multilingual Knowledge Editing | acl-long.21 | Poster | 2312.13040 | [
"https://github.com/vicky-wil/remake"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.21/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.22.bib | @inproceedings{park-etal-2024-picturing,
title = "Picturing Ambiguity: A Visual Twist on the {W}inograd Schema Challenge",
author = "Park, Brendan and
Janecek, Madeline and
Ezzati-Jivan, Naser and
Li, Yifeng and
Emami, Ali",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.22",
pages = "355--374",
abstract = "Large Language Models (LLMs) have demonstrated remarkable success in tasks like the Winograd Schema Challenge (WSC), showcasing advanced textual common-sense reasoning. However, applying this reasoning to multimodal domains, where understanding text and images together is essential, remains a substantial challenge. To address this, we introduce WinoVis, a novel dataset specifically designed to probe text-to-image models on pronoun disambiguation within multimodal contexts. Utilizing GPT-4 for prompt generation and Diffusion Attentive Attribution Maps (DAAM) for heatmap analysis, we propose a novel evaluation framework that isolates the models{'} ability in pronoun disambiguation from other visual processing challenges. Evaluation of successive model versions reveals that, despite incremental advancements, Stable Diffusion 2.0 achieves a precision of 56.7{\%} on WinoVis, only marginally surpassing random guessing. Further error analysis identifies important areas for future research aimed at advancing text-to-image models in their ability to interpret and interact with the complex visual world.",
}
| Large Language Models (LLMs) have demonstrated remarkable success in tasks like the Winograd Schema Challenge (WSC), showcasing advanced textual common-sense reasoning. However, applying this reasoning to multimodal domains, where understanding text and images together is essential, remains a substantial challenge. To address this, we introduce WinoVis, a novel dataset specifically designed to probe text-to-image models on pronoun disambiguation within multimodal contexts. Utilizing GPT-4 for prompt generation and Diffusion Attentive Attribution Maps (DAAM) for heatmap analysis, we propose a novel evaluation framework that isolates the models{'} ability in pronoun disambiguation from other visual processing challenges. Evaluation of successive model versions reveals that, despite incremental advancements, Stable Diffusion 2.0 achieves a precision of 56.7{\%} on WinoVis, only marginally surpassing random guessing. Further error analysis identifies important areas for future research aimed at advancing text-to-image models in their ability to interpret and interact with the complex visual world. | [
"Park, Brendan",
"Janecek, Madeline",
"Ezzati-Jivan, Naser",
"Li, Yifeng",
"Emami, Ali"
] | Picturing Ambiguity: A Visual Twist on the Winograd Schema Challenge | acl-long.22 | Oral | 2405.16277 | [
"https://github.com/bpark2/winovis"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.22/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.23.bib | @inproceedings{kumar-etal-2024-subtle,
title = "Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models",
author = "Kumar, Abhishek and
Yunusov, Sarfaroz and
Emami, Ali",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.23",
pages = "375--392",
abstract = "Research on Large Language Models (LLMs) has often neglected subtle biases that, although less apparent, can significantly influence the models{'} outputs toward particular social narratives. This study addresses two such biases within LLMs: representative bias, which denotes a tendency of LLMs to generate outputs that mirror the experiences of certain identity groups, and affinity bias, reflecting the models{'} evaluative preferences for specific narratives or viewpoints. We introduce two novel metrics to measure these biases: the Representative Bias Score (RBS) and the Affinity Bias Score (ABS), and present the Creativity-Oriented Generation Suite (CoGS), a collection of open-ended tasks such as short story writing and poetry composition, designed with customized rubrics to detect these subtle biases. Our analysis uncovers marked representative biases in prominent LLMs, with a preference for identities associated with being white, straight, and men. Furthermore, our investigation of affinity bias reveals distinctive evaluative patterns within each model, akin to {`}bias fingerprints{'}. This trend is also seen in human evaluators, highlighting a complex interplay between human and machine bias perceptions.",
}
| Research on Large Language Models (LLMs) has often neglected subtle biases that, although less apparent, can significantly influence the models{'} outputs toward particular social narratives. This study addresses two such biases within LLMs: representative bias, which denotes a tendency of LLMs to generate outputs that mirror the experiences of certain identity groups, and affinity bias, reflecting the models{'} evaluative preferences for specific narratives or viewpoints. We introduce two novel metrics to measure these biases: the Representative Bias Score (RBS) and the Affinity Bias Score (ABS), and present the Creativity-Oriented Generation Suite (CoGS), a collection of open-ended tasks such as short story writing and poetry composition, designed with customized rubrics to detect these subtle biases. Our analysis uncovers marked representative biases in prominent LLMs, with a preference for identities associated with being white, straight, and men. Furthermore, our investigation of affinity bias reveals distinctive evaluative patterns within each model, akin to {`}bias fingerprints{'}. This trend is also seen in human evaluators, highlighting a complex interplay between human and machine bias perceptions. | [
"Kumar, Abhishek",
"Yunusov, Sarfaroz",
"Emami, Ali"
] | Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models | acl-long.23 | Poster | 2405.14555 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.23/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.24.bib | @inproceedings{leto-etal-2024-framing,
title = "Framing in the Presence of Supporting Data: A Case Study in {U}.{S}. Economic News",
author = "Leto, Alexandria and
Pickens, Elliot and
Needell, Coen and
Rothschild, David and
Pacheco, Maria",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.24",
pages = "393--415",
abstract = "The mainstream media has much leeway in what it chooses to cover and how it covers it. These choices have real-world consequences on what people know and their subsequent behaviors. However, the lack of objective measures to evaluate editorial choices makes research in this area particularly difficult. In this paper, we argue that there are newsworthy topics where objective measures exist in the form of supporting data and propose a computational framework to analyze editorial choices in this setup. We focus on the economy because the reporting of economic indicators presents us with a relatively easy way to determine both the selection and framing of various publications. Their values provide a ground truth of how the economy is doing relative to how the publications choose to cover it. To do this, we define frame prediction as a set of interdependent tasks. At the article level, we learn to identify the reported stance towards the general state of the economy. Then, for every numerical quantity reported in the article, we learn to identify whether it corresponds to an economic indicator and whether it is being reported in a positive or negative way. To perform our analysis, we track six American publishers and each article that appeared in the top 10 slots of their landing page between 2015 and 2023.",
}
| The mainstream media has much leeway in what it chooses to cover and how it covers it. These choices have real-world consequences on what people know and their subsequent behaviors. However, the lack of objective measures to evaluate editorial choices makes research in this area particularly difficult. In this paper, we argue that there are newsworthy topics where objective measures exist in the form of supporting data and propose a computational framework to analyze editorial choices in this setup. We focus on the economy because the reporting of economic indicators presents us with a relatively easy way to determine both the selection and framing of various publications. Their values provide a ground truth of how the economy is doing relative to how the publications choose to cover it. To do this, we define frame prediction as a set of interdependent tasks. At the article level, we learn to identify the reported stance towards the general state of the economy. Then, for every numerical quantity reported in the article, we learn to identify whether it corresponds to an economic indicator and whether it is being reported in a positive or negative way. To perform our analysis, we track six American publishers and each article that appeared in the top 10 slots of their landing page between 2015 and 2023. | [
"Leto, Alex",
"ria",
"Pickens, Elliot",
"Needell, Coen",
"Rothschild, David",
"Pacheco, Maria"
] | Framing in the Presence of Supporting Data: A Case Study in U.S. Economic News | acl-long.24 | Poster | 2402.14224 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.24/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.25.bib | @inproceedings{wang-etal-2024-mementos,
title = "Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences",
author = "Wang, Xiyao and
Zhou, Yuhang and
Liu, Xiaoyu and
Lu, Hongjin and
Xu, Yuancheng and
He, Feihong and
Yoon, Jaehong and
Lu, Taixi and
Liu, Fuxiao and
Bertasius, Gedas and
Bansal, Mohit and
Yao, Huaxiu and
Huang, Furong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.25",
pages = "416--442",
abstract = "Multimodal Large Language Models (MLLMs) have demonstrated proficiency in handling a variety of visual-language tasks. However, current MLLM benchmarks are predominantly designed to evaluate reasoning based on static information about a single image, and the ability of modern MLLMs to extrapolate from image sequences, which is essential for understanding our ever-changing world, has been less investigated. To address this challenge, this paper introduces Mementos, a new benchmark designed to assess MLLMs{'} sequential image reasoning abilities. Mementos features 4,761 diverse image sequences with varying lengths. We also employ a GPT-4 assisted method to evaluate MLLM reasoning performance. Through a careful evaluation of nine recent MLLMs on Mementos, including GPT-4V and Gemini, we find that they struggle to accurately describe dynamic information about given image sequences, often leading to hallucinations/misrepresentations of objects and their corresponding behaviors. Our quantitative analysis and case studies identify three key factors impacting MLLMs{'} sequential image reasoning: the correlation between object and behavioral hallucinations, the influence of co-occurring behaviors, and the compounding impact of behavioral hallucinations.",
}
| Multimodal Large Language Models (MLLMs) have demonstrated proficiency in handling a variety of visual-language tasks. However, current MLLM benchmarks are predominantly designed to evaluate reasoning based on static information about a single image, and the ability of modern MLLMs to extrapolate from image sequences, which is essential for understanding our ever-changing world, has been less investigated. To address this challenge, this paper introduces Mementos, a new benchmark designed to assess MLLMs{'} sequential image reasoning abilities. Mementos features 4,761 diverse image sequences with varying lengths. We also employ a GPT-4 assisted method to evaluate MLLM reasoning performance. Through a careful evaluation of nine recent MLLMs on Mementos, including GPT-4V and Gemini, we find that they struggle to accurately describe dynamic information about given image sequences, often leading to hallucinations/misrepresentations of objects and their corresponding behaviors. Our quantitative analysis and case studies identify three key factors impacting MLLMs{'} sequential image reasoning: the correlation between object and behavioral hallucinations, the influence of co-occurring behaviors, and the compounding impact of behavioral hallucinations. | [
"Wang, Xiyao",
"Zhou, Yuhang",
"Liu, Xiaoyu",
"Lu, Hongjin",
"Xu, Yuancheng",
"He, Feihong",
"Yoon, Jaehong",
"Lu, Taixi",
"Liu, Fuxiao",
"Bertasius, Gedas",
"Bansal, Mohit",
"Yao, Huaxiu",
"Huang, Furong"
] | Mementos: A Comprehensive Benchmark for Multimodal Large Language Model Reasoning over Image Sequences | acl-long.25 | Poster | 2401.10529 | [
"https://github.com/umd-huang-lab/mementos"
] | https://huggingface.co/papers/2401.10529 | 4 | 1 | 0 | 12 | https://aclanthology.org/2024.acl-long.25/ | [] | [
"furonghuang-lab/Mementos"
] | [] | 1 |
https://aclanthology.org/2024.acl-long.26.bib | @inproceedings{gao-etal-2024-ttm,
title = "{TTM}-{RE}: Memory-Augmented Document-Level Relation Extraction",
author = "Gao, Chufan and
Wang, Xuan and
Sun, Jimeng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.26",
pages = "443--458",
abstract = "Document-level relation extraction aims to categorize the association between any two entities within a document.We find that previous methods for document-level relation extraction are ineffective in exploiting the full potential of large amounts of training data with varied noise levels. For example, in the ReDocRED benchmark dataset, state-of-the-art methods trained on the large-scale, lower-quality, distantly supervised training data generally do not perform better than those trained solely on the smaller, high-quality, human-annotated training data. To unlock the full potential of large-scale noisy training data for document-level relation extraction, we propose TTM-RE, a novel approach that integrates a trainable memory module, known as the Token Turing Machine, with a noisy-robust loss function that accounts for the positive-unlabeled setting. The trainable memory module enhances knowledge extraction from the large-scale noisy training dataset through an explicit learning of the memory tokens and a soft integration of the learned memory tokens into the input representation, thereby improving the model{'}s effectiveness for the final relation classification. Extensive experiments on ReDocRED, a benchmark dataset for document-level relation extraction, reveal that TTM-RE achieves state-of-the-art performance (with an absolute F1 score improvement of over 3{\%}). Ablation studies further illustrate the superiority of TTM-RE in other domains (the ChemDisGene dataset in the biomedical domain) and under highly unlabeled settings.",
}
| Document-level relation extraction aims to categorize the association between any two entities within a document. We find that previous methods for document-level relation extraction are ineffective in exploiting the full potential of large amounts of training data with varied noise levels. For example, in the ReDocRED benchmark dataset, state-of-the-art methods trained on the large-scale, lower-quality, distantly supervised training data generally do not perform better than those trained solely on the smaller, high-quality, human-annotated training data. To unlock the full potential of large-scale noisy training data for document-level relation extraction, we propose TTM-RE, a novel approach that integrates a trainable memory module, known as the Token Turing Machine, with a noisy-robust loss function that accounts for the positive-unlabeled setting. The trainable memory module enhances knowledge extraction from the large-scale noisy training dataset through an explicit learning of the memory tokens and a soft integration of the learned memory tokens into the input representation, thereby improving the model{'}s effectiveness for the final relation classification. Extensive experiments on ReDocRED, a benchmark dataset for document-level relation extraction, reveal that TTM-RE achieves state-of-the-art performance (with an absolute F1 score improvement of over 3{\%}). Ablation studies further illustrate the superiority of TTM-RE in other domains (the ChemDisGene dataset in the biomedical domain) and under highly unlabeled settings. | [
"Gao, Chufan",
"Wang, Xuan",
"Sun, Jimeng"
] | TTM-RE: Memory-Augmented Document-Level Relation Extraction | acl-long.26 | Poster | 2406.05906 | [
"https://github.com/chufangao/ttm-re"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.26/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.27.bib | @inproceedings{peng-etal-2024-answer,
title = "Answer is All You Need: Instruction-following Text Embedding via Answering the Question",
author = "Peng, Letian and
Zhang, Yuwei and
Wang, Zilong and
Srinivasa, Jayanth and
Liu, Gaowen and
Wang, Zihan and
Shang, Jingbo",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.27",
pages = "459--477",
abstract = "This work aims to build a text embedder that can capture characteristics of texts specified by user instructions clarifying the similarity criterion. While previous methods improve general task awareness by injecting the instruction information into encoding, they fail to be sensitive to clearer criteria like {``}evaluate similarity based on emotion{''}. We instead propose a different viewpoint, which treats the instruction as a {``}question{''} about the input text and encodes the expected answers to obtain the representation accordingly. Intuitively, texts with the same (implicit) semantics would share similar answers following the instruction, thus leading to more similar representations. Specifically, we propose InBedder that instantiates this learning-to-answer idea by only fine-tuning language models via abstractive question answering tasks. Despite its simplicity, InBedder demonstrates significantly improved instruction-following capabilities according to our proposed instruction awareness tests and instruction robustness tests, when applied to language models with large language models (LLMs) (e.g., llama-2-7b) and smaller encoder-based LMs (e.g., roberta-large). Additionally, our qualitative analysis of clustering outcomes, achieved by applying diverse instructions to the same unlabeled corpus, demonstrates a high degree of interpretability in the clusters formed.",
}
| This work aims to build a text embedder that can capture characteristics of texts specified by user instructions clarifying the similarity criterion. While previous methods improve general task awareness by injecting the instruction information into encoding, they fail to be sensitive to clearer criteria like {``}evaluate similarity based on emotion{''}. We instead propose a different viewpoint, which treats the instruction as a {``}question{''} about the input text and encodes the expected answers to obtain the representation accordingly. Intuitively, texts with the same (implicit) semantics would share similar answers following the instruction, thus leading to more similar representations. Specifically, we propose InBedder that instantiates this learning-to-answer idea by only fine-tuning language models via abstractive question answering tasks. Despite its simplicity, InBedder demonstrates significantly improved instruction-following capabilities according to our proposed instruction awareness tests and instruction robustness tests, when applied to both large language models (LLMs) (e.g., llama-2-7b) and smaller encoder-based LMs (e.g., roberta-large). Additionally, our qualitative analysis of clustering outcomes, achieved by applying diverse instructions to the same unlabeled corpus, demonstrates a high degree of interpretability in the clusters formed. | [
"Peng, Letian",
"Zhang, Yuwei",
"Wang, Zilong",
"Srinivasa, Jayanth",
"Liu, Gaowen",
"Wang, Zihan",
"Shang, Jingbo"
] | Answer is All You Need: Instruction-following Text Embedding via Answering the Question | acl-long.27 | Poster | 2402.09642 | [
"https://github.com/zhang-yu-wei/inbedder"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.27/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.28.bib | @inproceedings{zhou-etal-2024-explore,
title = "Explore Spurious Correlations at the Concept Level in Language Models for Text Classification",
author = "Zhou, Yuhang and
Xu, Paiheng and
Liu, Xiaoyu and
An, Bang and
Ai, Wei and
Huang, Furong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.28",
pages = "478--492",
abstract = "Language models (LMs) have achieved notable success in numerous NLP tasks, employing both fine-tuning and in-context learning (ICL) methods. While language models demonstrate exceptional performance, they face robustness challenges due to spurious correlations arising from imbalanced label distributions in training data or ICL exemplars. Previous research has primarily concentrated on word, phrase, and syntax features, neglecting the concept level, often due to the absence of concept labels and difficulty in identifying conceptual content in input texts. This paper introduces two main contributions. First, we employ ChatGPT to assign concept labels to texts, assessing concept bias in models during fine-tuning or ICL on test data. We find that LMs, when encountering spurious correlations between a concept and a label in training or prompts, resort to shortcuts for predictions. Second, we introduce a data rebalancing technique that incorporates ChatGPT-generated counterfactual data, thereby balancing label distribution and mitigating spurious correlations. Our method{'}s efficacy, surpassing traditional token removal approaches, is validated through extensive testing.",
}
| Language models (LMs) have achieved notable success in numerous NLP tasks, employing both fine-tuning and in-context learning (ICL) methods. While language models demonstrate exceptional performance, they face robustness challenges due to spurious correlations arising from imbalanced label distributions in training data or ICL exemplars. Previous research has primarily concentrated on word, phrase, and syntax features, neglecting the concept level, often due to the absence of concept labels and difficulty in identifying conceptual content in input texts. This paper makes two main contributions. First, we employ ChatGPT to assign concept labels to texts, assessing concept bias in models during fine-tuning or ICL on test data. We find that LMs, when encountering spurious correlations between a concept and a label in training or prompts, resort to shortcuts for predictions. Second, we introduce a data rebalancing technique that incorporates ChatGPT-generated counterfactual data, thereby balancing label distribution and mitigating spurious correlations. Our method{'}s efficacy, surpassing traditional token removal approaches, is validated through extensive testing. | [
"Zhou, Yuhang",
"Xu, Paiheng",
"Liu, Xiaoyu",
"An, Bang",
"Ai, Wei",
"Huang, Furong"
] | Explore Spurious Correlations at the Concept Level in Language Models for Text Classification | acl-long.28 | Poster | 2311.08648 | [
"https://github.com/tonyzhou98/concept-spurious-correlation"
] | https://huggingface.co/papers/2311.08648 | 1 | 2 | 0 | 6 | https://aclanthology.org/2024.acl-long.28/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.29.bib | @inproceedings{cheng-etal-2024-every,
title = "Every Answer Matters: Evaluating Commonsense with Probabilistic Measures",
author = "Cheng, Qi and
Boratko, Michael and
Yelugam, Pranay Kumar and
O{'}Gorman, Tim and
Singh, Nalini and
McCallum, Andrew and
Li, Xiang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.29",
pages = "493--506",
abstract = "Large language models have demonstrated impressive performance on commonsense tasks; however, these tasks are often posed as multiple-choice questions, allowing models to exploit systematic biases. Commonsense is also inherently probabilistic with multiple correct answers. The purpose of {``}boiling water{''} could be making tea, cooking but also could be killing germs. Existing tasks do not capture the probabilistic nature of common sense. To this end, we present commonsense frame completion (CFC), a new generative task that evaluates common sense via multiple open-ended generations. We also propose a method of probabilistic evaluation that strongly correlates with human judgments. Humans drastically outperform strong language model baselines on our dataset, indicating this approach is both a challenging and useful evaluation of machine common sense.",
}
| Large language models have demonstrated impressive performance on commonsense tasks; however, these tasks are often posed as multiple-choice questions, allowing models to exploit systematic biases. Commonsense is also inherently probabilistic with multiple correct answers. The purpose of {``}boiling water{''} could be making tea or cooking, but it could also be killing germs. Existing tasks do not capture the probabilistic nature of common sense. To this end, we present commonsense frame completion (CFC), a new generative task that evaluates common sense via multiple open-ended generations. We also propose a method of probabilistic evaluation that strongly correlates with human judgments. Humans drastically outperform strong language model baselines on our dataset, indicating this approach is both a challenging and useful evaluation of machine common sense. | [
"Cheng, Qi",
"Boratko, Michael",
"Yelugam, Pranay Kumar",
"O{'}Gorman, Tim",
"Singh, Nalini",
"McCallum, Andrew",
"Li, Xiang"
] | Every Answer Matters: Evaluating Commonsense with Probabilistic Measures | acl-long.29 | Poster | 2406.04145 | [
"https://github.com/qxc101/probeval_cfc"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.29/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.30.bib | @inproceedings{xie-etal-2024-gradsafe,
title = "{G}rad{S}afe: Detecting Jailbreak Prompts for {LLM}s via Safety-Critical Gradient Analysis",
author = "Xie, Yueqi and
Fang, Minghong and
Pi, Renjie and
Gong, Neil",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.30",
pages = "507--518",
abstract = "Large Language Models (LLMs) face threats from jailbreak prompts. Existing methods for detecting jailbreak prompts are primarily online moderation APIs or finetuned LLMs. These strategies, however, often require extensive and resource-intensive data collection and training processes. In this study, we propose GradSafe, which effectively detects jailbreak prompts by scrutinizing the gradients of safety-critical parameters in LLMs. Our method is grounded in a pivotal observation: the gradients of an LLM{'}s loss for jailbreak prompts paired with compliance response exhibit similar patterns on certain safety-critical parameters. In contrast, safe prompts lead to different gradient patterns. Building on this observation, GradSafe analyzes the gradients from prompts (paired with compliance responses) to accurately detect jailbreak prompts. We show that GradSafe, applied to Llama-2 without further training, outperforms Llama Guard{---}despite its extensive finetuning with a large dataset{---}in detecting jailbreak prompts. This superior performance is consistent across both zero-shot and adaptation scenarios, as evidenced by our evaluations on ToxicChat and XSTest. The source code is available at https://github.com/xyq7/GradSafe.",
}
| Large Language Models (LLMs) face threats from jailbreak prompts. Existing methods for detecting jailbreak prompts are primarily online moderation APIs or finetuned LLMs. These strategies, however, often require extensive and resource-intensive data collection and training processes. In this study, we propose GradSafe, which effectively detects jailbreak prompts by scrutinizing the gradients of safety-critical parameters in LLMs. Our method is grounded in a pivotal observation: the gradients of an LLM{'}s loss for jailbreak prompts paired with compliance responses exhibit similar patterns on certain safety-critical parameters. In contrast, safe prompts lead to different gradient patterns. Building on this observation, GradSafe analyzes the gradients from prompts (paired with compliance responses) to accurately detect jailbreak prompts. We show that GradSafe, applied to Llama-2 without further training, outperforms Llama Guard{---}despite its extensive finetuning with a large dataset{---}in detecting jailbreak prompts. This superior performance is consistent across both zero-shot and adaptation scenarios, as evidenced by our evaluations on ToxicChat and XSTest. The source code is available at https://github.com/xyq7/GradSafe. | [
"Xie, Yueqi",
"Fang, Minghong",
"Pi, Renjie",
"Gong, Neil"
] | GradSafe: Detecting Jailbreak Prompts for LLMs via Safety-Critical Gradient Analysis | acl-long.30 | Poster | 2402.13494 | [
"https://github.com/xyq7/gradsafe"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.30/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.31.bib | @inproceedings{lee-etal-2024-pouring,
title = "Pouring Your Heart Out: Investigating the Role of Figurative Language in Online Expressions of Empathy",
author = "Lee, Gyeongeun and
Wong, Christina and
Guo, Meghan and
Parde, Natalie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.31",
pages = "519--529",
abstract = "Empathy is a social mechanism used to support and strengthen emotional connection with others, including in online communities. However, little is currently known about the nature of these online expressions, nor the particular factors that may lead to their improved detection. In this work, we study the role of a specific and complex subcategory of linguistic phenomena, figurative language, in online expressions of empathy. Our extensive experiments reveal that incorporating features regarding the use of metaphor, idiom, and hyperbole into empathy detection models improves their performance, resulting in impressive maximum F1 scores of 0.942 and 0.809 for identifying posts without and with empathy, respectively.",
}
| Empathy is a social mechanism used to support and strengthen emotional connection with others, including in online communities. However, little is currently known about the nature of these online expressions, nor the particular factors that may lead to their improved detection. In this work, we study the role of a specific and complex subcategory of linguistic phenomena, figurative language, in online expressions of empathy. Our extensive experiments reveal that incorporating features regarding the use of metaphor, idiom, and hyperbole into empathy detection models improves their performance, resulting in impressive maximum F1 scores of 0.942 and 0.809 for identifying posts without and with empathy, respectively. | [
"Lee, Gyeongeun",
"Wong, Christina",
"Guo, Meghan",
"Parde, Natalie"
] | Pouring Your Heart Out: Investigating the Role of Figurative Language in Online Expressions of Empathy | acl-long.31 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.31/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.32.bib | @inproceedings{wang-etal-2024-information,
title = "An Information-Theoretic Approach to Analyze {NLP} Classification Tasks",
author = "Wang, Luran and
Gales, Mark and
Raina, Vatsal",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.32",
pages = "530--551",
abstract = "Understanding the contribution of the inputs on the output is useful across many tasks. This work provides an information-theoretic framework to analyse the influence of inputs for text classification tasks. Natural language processing (NLP) tasks take either a single or multiple text elements to predict an output variable. Each text element has two components: the semantic meaning and a linguistic realization. Multiple-choice reading comprehension (MCRC) and sentiment classification (SC) are selected to showcase the framework. For MCRC, it is found that the relative context influence on the output reduces on more challenging datasets. In particular, more challenging contexts allows greater variation in the question complexity. Hence, test creators need to carefully consider the choice of the context when designing multiple-choice questions for assessment. For SC, it is found the semantic meaning of the input dominates compared to its linguistic realization when determining the sentiment. The framework is made available at: https://github.com/WangLuran/nlp-element-influence.",
}
| Understanding the contribution of the inputs to the output is useful across many tasks. This work provides an information-theoretic framework to analyse the influence of inputs for text classification tasks. Natural language processing (NLP) tasks take either a single or multiple text elements to predict an output variable. Each text element has two components: the semantic meaning and a linguistic realization. Multiple-choice reading comprehension (MCRC) and sentiment classification (SC) are selected to showcase the framework. For MCRC, it is found that the relative context influence on the output reduces on more challenging datasets. In particular, more challenging contexts allow greater variation in the question complexity. Hence, test creators need to carefully consider the choice of the context when designing multiple-choice questions for assessment. For SC, it is found that the semantic meaning of the input dominates over its linguistic realization when determining the sentiment. The framework is made available at: https://github.com/WangLuran/nlp-element-influence. | [
"Wang, Luran",
"Gales, Mark",
"Raina, Vatsal"
] | An Information-Theoretic Approach to Analyze NLP Classification Tasks | acl-long.32 | Poster | 2402.00978 | [
"https://github.com/wangluran/nlp-element-influence"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.32/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.33.bib | @inproceedings{zhang-etal-2024-model,
title = "Can Your Model Tell a Negation from an Implicature? Unravelling Challenges With Intent Encoders",
author = "Zhang, Yuwei and
Singh, Siffi and
Sengupta, Sailik and
Shalyminov, Igor and
Su, Hang and
Song, Hwanjun and
Mansour, Saab",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.33",
pages = "552--567",
abstract = "Conversational systems often rely on embedding models for intent classification and intent clustering tasks. The advent of Large Language Models (LLMs), which enable instructional embeddings allowing one to adjust semantics over the embedding space using prompts, are being viewed as a panacea for these downstream conversational tasks. However, traditional evaluation benchmarks rely solely on task metrics that don{'}t particularly measure gaps related to semantic understanding. Thus, we propose an intent semantic toolkit that gives a more holistic view of intent embedding models by considering three tasks{--} (1) intent classification, (2) intent clustering, and (3) a novel triplet task. The triplet task gauges the model{'}s understanding of two semantic concepts paramount in real-world conversational systems{--} negation and implicature. We observe that current embedding models fare poorly in semantic understanding of these concepts. To address this, we propose a pre-training approach to improve the embedding model by leveraging augmentation with data generated by an auto-regressive model and a contrastive loss term. Our approach improves the semantic understanding of the intent embedding model on the aforementioned linguistic dimensions while slightly effecting their performance on downstream task metrics.",
}
| Conversational systems often rely on embedding models for intent classification and intent clustering tasks. The advent of Large Language Models (LLMs), which enable instructional embeddings allowing one to adjust semantics over the embedding space using prompts, is being viewed as a panacea for these downstream conversational tasks. However, traditional evaluation benchmarks rely solely on task metrics that don{'}t particularly measure gaps related to semantic understanding. Thus, we propose an intent semantic toolkit that gives a more holistic view of intent embedding models by considering three tasks{--} (1) intent classification, (2) intent clustering, and (3) a novel triplet task. The triplet task gauges the model{'}s understanding of two semantic concepts paramount in real-world conversational systems{--} negation and implicature. We observe that current embedding models fare poorly in semantic understanding of these concepts. To address this, we propose a pre-training approach to improve the embedding model by leveraging augmentation with data generated by an auto-regressive model and a contrastive loss term. Our approach improves the semantic understanding of the intent embedding model on the aforementioned linguistic dimensions while only slightly affecting its performance on downstream task metrics. | [
"Zhang, Yuwei",
"Singh, Siffi",
"Sengupta, Sailik",
"Shalyminov, Igor",
"Su, Hang",
"Song, Hwanjun",
"Mansour, Saab"
] | Can Your Model Tell a Negation from an Implicature? Unravelling Challenges With Intent Encoders | acl-long.33 | Poster | 2403.04314 | [
""
] | https://huggingface.co/papers/2403.04314 | 0 | 0 | 0 | 7 | https://aclanthology.org/2024.acl-long.33/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.34.bib | @inproceedings{he-etal-2024-wav2gloss,
title = "{W}av2{G}loss: Generating Interlinear Glossed Text from Speech",
author = "He, Taiqi and
Choi, Kwanghee and
Tjuatja, Lindia and
Robinson, Nathaniel and
Shi, Jiatong and
Watanabe, Shinji and
Neubig, Graham and
Mortensen, David and
Levin, Lori",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.34",
pages = "568--582",
abstract = "Thousands of the world{'}s languages are in danger of extinction{---}a tremendous threat to cultural identities and human language diversity. Interlinear Glossed Text (IGT) is a form of linguistic annotation that can support documentation and resource creation for these languages{'} communities. IGT typically consists of (1) transcriptions, (2) morphological segmentation, (3) glosses, and (4) free translations to a majority language. We propose Wav2Gloss: a task in which these four annotation components are extracted automatically from speech, and introduce the first dataset to this end, Fieldwork: a corpus of speech with all these annotations, derived from the work of field linguists, covering 37 languages, with standard formatting, and train/dev/test splits. We provide various baselines to lay the groundwork for future research on IGT generation from speech, such as end-to-end versus cascaded, monolingual versus multilingual, and single-task versus multi-task approaches.",
}
| Thousands of the world{'}s languages are in danger of extinction{---}a tremendous threat to cultural identities and human language diversity. Interlinear Glossed Text (IGT) is a form of linguistic annotation that can support documentation and resource creation for these languages{'} communities. IGT typically consists of (1) transcriptions, (2) morphological segmentation, (3) glosses, and (4) free translations to a majority language. We propose Wav2Gloss: a task in which these four annotation components are extracted automatically from speech, and introduce the first dataset to this end, Fieldwork: a corpus of speech with all these annotations, derived from the work of field linguists, covering 37 languages, with standard formatting, and train/dev/test splits. We provide various baselines to lay the groundwork for future research on IGT generation from speech, such as end-to-end versus cascaded, monolingual versus multilingual, and single-task versus multi-task approaches. | [
"He, Taiqi",
"Choi, Kwanghee",
"Tjuatja, Lindia",
"Robinson, Nathaniel",
"Shi, Jiatong",
"Watanabe, Shinji",
"Neubig, Graham",
"Mortensen, David",
"Levin, Lori"
] | Wav2Gloss: Generating Interlinear Glossed Text from Speech | acl-long.34 | Poster | 2403.13169 | [
""
] | https://huggingface.co/papers/2403.13169 | 0 | 0 | 0 | 9 | https://aclanthology.org/2024.acl-long.34/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.35.bib | @inproceedings{hu-etal-2024-leveraging,
title = "Leveraging Codebook Knowledge with {NLI} and {C}hat{GPT} for Zero-Shot Political Relation Classification",
author = "Hu, Yibo and
Skorupa Parolin, Erick and
Khan, Latifur and
Brandt, Patrick and
Osorio, Javier and
D{'}Orazio, Vito",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.35",
pages = "583--603",
abstract = "Is it possible accurately classify political relations within evolving event ontologies without extensive annotations? This study investigates zero-shot learning methods that use expert knowledge from existing annotation codebook, and evaluates the performance of advanced ChatGPT (GPT-3.5/4) and a natural language inference (NLI)-based model called ZSP. ChatGPT uses codebook{'}s labeled summaries as prompts, whereas ZSP breaks down the classification task into context, event mode, and class disambiguation to refine task-specific hypotheses. This decomposition enhances interpretability, efficiency, and adaptability to schema changes. The experiments reveal ChatGPT{'}s strengths and limitations, and crucially show ZSP{'}s outperformance of dictionary-based methods and its competitive edge over some supervised models. These findings affirm the value of ZSP for validating event records and advancing ontology development. Our study underscores the efficacy of leveraging transfer learning and existing domain expertise to enhance research efficiency and scalability.",
}
| Is it possible to accurately classify political relations within evolving event ontologies without extensive annotations? This study investigates zero-shot learning methods that use expert knowledge from an existing annotation codebook, and evaluates the performance of advanced ChatGPT (GPT-3.5/4) and a natural language inference (NLI)-based model called ZSP. ChatGPT uses the codebook{'}s labeled summaries as prompts, whereas ZSP breaks down the classification task into context, event mode, and class disambiguation to refine task-specific hypotheses. This decomposition enhances interpretability, efficiency, and adaptability to schema changes. The experiments reveal ChatGPT{'}s strengths and limitations, and crucially show that ZSP outperforms dictionary-based methods and holds a competitive edge over some supervised models. These findings affirm the value of ZSP for validating event records and advancing ontology development. Our study underscores the efficacy of leveraging transfer learning and existing domain expertise to enhance research efficiency and scalability. | [
"Hu, Yibo",
"Skorupa Parolin, Erick",
"Khan, Latifur",
"Br",
"t, Patrick",
"Osorio, Javier",
"D{'}Orazio, Vito"
] | Leveraging Codebook Knowledge with NLI and ChatGPT for Zero-Shot Political Relation Classification | acl-long.35 | Poster | 2308.07876 | [
"https://github.com/snowood1/zero-shot-plover"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.35/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.36.bib | @inproceedings{xu-wang-2024-spor,
title = "{SPOR}: A Comprehensive and Practical Evaluation Method for Compositional Generalization in Data-to-Text Generation",
author = "Xu, Ziyao and
Wang, Houfeng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.36",
pages = "604--621",
abstract = "Compositional generalization is an important ability of language models and has many different manifestations. For data-to-text generation, previous research on this ability is limited to a single manifestation called Systematicity and lacks consideration of large language models (LLMs), which cannot fully cover practical application scenarios. In this work, we propose SPOR, a comprehensive and practical evaluation method for compositional generalization in data-to-text generation. SPOR includes four aspects of manifestations (Systematicity, Productivity, Order invariance, and Rule learnability) and allows high-quality evaluation without additional manual annotations based on existing datasets. We demonstrate SPOR on two different datasets and evaluate some existing language models including LLMs. We find that the models are deficient in various aspects of the evaluation and need further improvement. Our work shows the necessity for comprehensive research on different manifestations of compositional generalization in data-to-text generation and provides a framework for evaluation.",
}
| Compositional generalization is an important ability of language models and has many different manifestations. For data-to-text generation, previous research on this ability is limited to a single manifestation called Systematicity and lacks consideration of large language models (LLMs), which cannot fully cover practical application scenarios. In this work, we propose SPOR, a comprehensive and practical evaluation method for compositional generalization in data-to-text generation. SPOR includes four aspects of manifestations (Systematicity, Productivity, Order invariance, and Rule learnability) and allows high-quality evaluation without additional manual annotations based on existing datasets. We demonstrate SPOR on two different datasets and evaluate some existing language models including LLMs. We find that the models are deficient in various aspects of the evaluation and need further improvement. Our work shows the necessity for comprehensive research on different manifestations of compositional generalization in data-to-text generation and provides a framework for evaluation. | [
"Xu, Ziyao",
"Wang, Houfeng"
] | SPOR: A Comprehensive and Practical Evaluation Method for Compositional Generalization in Data-to-Text Generation | acl-long.36 | Poster | 2405.10650 | [
"https://github.com/xzy-xzy/spor"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.36/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.37.bib | @inproceedings{shi-etal-2024-opex,
title = "{OPE}x: A Component-Wise Analysis of {LLM}-Centric Agents in Embodied Instruction Following",
author = "Shi, Haochen and
Sun, Zhiyuan and
Yuan, Xingdi and
C{\^o}t{\'e}, Marc-Alexandre and
Liu, Bang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.37",
pages = "622--636",
abstract = "Embodied Instruction Following (EIF) is a crucial task in embodied learning, requiring agents to interact with their environment through egocentric observations to fulfill natural language instructions. Recent advancements have seen a surge in employing large language models (LLMs) within a framework-centric approach to enhance performance in embodied learning tasks, including EIF. Despite these efforts, there exists a lack of a unified understanding regarding the impact of various components{---}ranging from visual perception to action execution{---}on task performance. To address this gap, we introduce OPEx, a comprehensive framework that delineates the core components essential for solving embodied learning tasks: Observer, Planner, and Executor. Through extensive evaluations, we provide a deep analysis of how each component influences EIF task performance. Furthermore, we innovate within this space by integrating a multi-agent design into the Planner component of our LLM-centric architecture, further enhancing task performance. Our findings reveal that LLM-centric design markedly improves EIF outcomes, identify visual perception and low-level action execution as critical bottlenecks, and demonstrate that augmenting LLMs with a multi-agent framework further elevates performance.",
}
| Embodied Instruction Following (EIF) is a crucial task in embodied learning, requiring agents to interact with their environment through egocentric observations to fulfill natural language instructions. Recent advancements have seen a surge in employing large language models (LLMs) within a framework-centric approach to enhance performance in embodied learning tasks, including EIF. Despite these efforts, there exists a lack of a unified understanding regarding the impact of various components{---}ranging from visual perception to action execution{---}on task performance. To address this gap, we introduce OPEx, a comprehensive framework that delineates the core components essential for solving embodied learning tasks: Observer, Planner, and Executor. Through extensive evaluations, we provide a deep analysis of how each component influences EIF task performance. Furthermore, we innovate within this space by integrating a multi-agent design into the Planner component of our LLM-centric architecture, further enhancing task performance. Our findings reveal that LLM-centric design markedly improves EIF outcomes, identify visual perception and low-level action execution as critical bottlenecks, and demonstrate that augmenting LLMs with a multi-agent framework further elevates performance. | [
"Shi, Haochen",
"Sun, Zhiyuan",
"Yuan, Xingdi",
"C{\\^o}t{\\'e}, Marc-Alex",
"re",
"Liu, Bang"
] | OPEx: A Component-Wise Analysis of LLM-Centric Agents in Embodied Instruction Following | acl-long.37 | Poster | 2403.03017 | [
""
] | https://huggingface.co/papers/2403.03017 | 1 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-long.37/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.38.bib | @inproceedings{shen-etal-2024-multimodal,
title = "Multimodal Instruction Tuning with Conditional Mixture of {L}o{RA}",
author = "Shen, Ying and
Xu, Zhiyang and
Wang, Qifan and
Cheng, Yu and
Yin, Wenpeng and
Huang, Lifu",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.38",
pages = "637--648",
abstract = "Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in diverse tasks across different domains, with an increasing focus on improving their zero-shot generalization capabilities for unseen multimodal tasks. Multimodal instruction tuning has emerged as a successful strategy for achieving zero-shot generalization by fine-tuning pre-trained models on diverse multimodal tasks through instructions. As MLLMs grow in complexity and size, the need for parameter-efficient fine-tuning methods like Low-Rank Adaption (LoRA), which fine-tunes with a minimal set of parameters, becomes essential. However, applying LoRA in multimodal instruction tuning presents the challenge of task interference, which leads to performance degradation, especially when dealing with a broad array of multimodal tasks. To address this, this paper introduces a novel approach that integrates multimodal instruction tuning with Conditional Mixture-of-LoRA (MixLoRA). It innovates upon LoRA by dynamically constructing low-rank adaptation matrices tailored to the unique demands of each input instance, aiming to mitigate task interference. Experimental results on various multimodal evaluation datasets indicate that MixLoRA not only outperforms the conventional LoRA with the same or even higher ranks, demonstrating its efficacy and adaptability in diverse multimodal tasks.",
}
| Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in diverse tasks across different domains, with an increasing focus on improving their zero-shot generalization capabilities for unseen multimodal tasks. Multimodal instruction tuning has emerged as a successful strategy for achieving zero-shot generalization by fine-tuning pre-trained models on diverse multimodal tasks through instructions. As MLLMs grow in complexity and size, the need for parameter-efficient fine-tuning methods like Low-Rank Adaptation (LoRA), which fine-tunes with a minimal set of parameters, becomes essential. However, applying LoRA in multimodal instruction tuning presents the challenge of task interference, which leads to performance degradation, especially when dealing with a broad array of multimodal tasks. To address this, this paper introduces a novel approach that integrates multimodal instruction tuning with Conditional Mixture-of-LoRA (MixLoRA). It innovates upon LoRA by dynamically constructing low-rank adaptation matrices tailored to the unique demands of each input instance, aiming to mitigate task interference. Experimental results on various multimodal evaluation datasets indicate that MixLoRA outperforms the conventional LoRA with the same or even higher ranks, demonstrating its efficacy and adaptability in diverse multimodal tasks. | [
"Shen, Ying",
"Xu, Zhiyang",
"Wang, Qifan",
"Cheng, Yu",
"Yin, Wenpeng",
"Huang, Lifu"
] | Multimodal Instruction Tuning with Conditional Mixture of LoRA | acl-long.38 | Poster | 2402.15896 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.38/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.39.bib | @inproceedings{xie-etal-2024-doclens,
title = "{D}oc{L}ens: Multi-aspect Fine-grained Medical Text Evaluation",
author = "Xie, Yiqing and
Zhang, Sheng and
Cheng, Hao and
Liu, Pengfei and
Gero, Zelalem and
Wong, Cliff and
Naumann, Tristan and
Poon, Hoifung and
Rose, Carolyn",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.39",
pages = "649--679",
abstract = "Medical text generation aims to assist with administrative work and highlight salient information to support decision-making.To reflect the specific requirements of medical text, in this paper, we propose a set of metrics to evaluate the completeness, conciseness, and attribution of the generated text at a fine-grained level. The metrics can be computed by various types of evaluators including instruction-following (both proprietary and open-source) and supervised entailment models. We demonstrate the effectiveness of the resulting framework, DocLens, with three evaluators on three tasks: clinical note generation, radiology report summarization, and patient question summarization. A comprehensive human study shows that DocLens exhibits substantially higher agreement with the judgments of medical experts than existing metrics. The results also highlight the need to improve open-source evaluators and suggest potential directions. We released the code at https://github.com/yiqingxyq/DocLens.",
}
| Medical text generation aims to assist with administrative work and highlight salient information to support decision-making. To reflect the specific requirements of medical text, in this paper, we propose a set of metrics to evaluate the completeness, conciseness, and attribution of the generated text at a fine-grained level. The metrics can be computed by various types of evaluators including instruction-following (both proprietary and open-source) and supervised entailment models. We demonstrate the effectiveness of the resulting framework, DocLens, with three evaluators on three tasks: clinical note generation, radiology report summarization, and patient question summarization. A comprehensive human study shows that DocLens exhibits substantially higher agreement with the judgments of medical experts than existing metrics. The results also highlight the need to improve open-source evaluators and suggest potential directions. We released the code at https://github.com/yiqingxyq/DocLens. | [
"Xie, Yiqing",
"Zhang, Sheng",
"Cheng, Hao",
"Liu, Pengfei",
"Gero, Zelalem",
"Wong, Cliff",
"Naumann, Tristan",
"Poon, Hoifung",
"Rose, Carolyn"
] | DocLens: Multi-aspect Fine-grained Medical Text Evaluation | acl-long.39 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.39/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.40.bib | @inproceedings{xia-etal-2024-fofo,
title = "{FOFO}: A Benchmark to Evaluate {LLM}s{'} Format-Following Capability",
author = "Xia, Congying and
Xing, Chen and
Du, Jiangshu and
Yang, Xinyi and
Feng, Yihao and
Xu, Ran and
Yin, Wenpeng and
Xiong, Caiming",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.40",
pages = "680--699",
abstract = "This paper presents FoFo, a pioneering benchmark for evaluating large language models{'} (LLMs) ability to follow complex, domain-specific formats, a crucial yet under-examined capability for their application as AI agents. Despite LLMs{'} advancements, existing benchmarks fail to assess their format-following proficiency adequately. FoFo fills this gap with a diverse range of real-world formats and instructions, developed through an AI-Human collaborative method. Our evaluation across both open-source (e.g., Llama 2, WizardLM) and closed-source (e.g., GPT-4, PALM2, Gemini) LLMs highlights three key findings: open-source models significantly lag behind closed-source ones in format adherence; LLMs{'} format-following performance is independent of their content generation quality; and LLMs{'} format proficiency varies across different domains. These insights suggest the need for specialized tuning for format-following skills and highlight FoFo{'}s role in guiding the selection of domain-specific AI agents. FoFo will be publicly released, contributing a critical tool for advancing LLM evaluation and application.",
}
| This paper presents FoFo, a pioneering benchmark for evaluating large language models{'} (LLMs) ability to follow complex, domain-specific formats, a crucial yet under-examined capability for their application as AI agents. Despite LLMs{'} advancements, existing benchmarks fail to assess their format-following proficiency adequately. FoFo fills this gap with a diverse range of real-world formats and instructions, developed through an AI-Human collaborative method. Our evaluation across both open-source (e.g., Llama 2, WizardLM) and closed-source (e.g., GPT-4, PALM2, Gemini) LLMs highlights three key findings: open-source models significantly lag behind closed-source ones in format adherence; LLMs{'} format-following performance is independent of their content generation quality; and LLMs{'} format proficiency varies across different domains. These insights suggest the need for specialized tuning for format-following skills and highlight FoFo{'}s role in guiding the selection of domain-specific AI agents. FoFo will be publicly released, contributing a critical tool for advancing LLM evaluation and application. | [
"Xia, Congying",
"Xing, Chen",
"Du, Jiangshu",
"Yang, Xinyi",
"Feng, Yihao",
"Xu, Ran",
"Yin, Wenpeng",
"Xiong, Caiming"
] | FOFO: A Benchmark to Evaluate LLMs' Format-Following Capability | acl-long.40 | Poster | 2402.18667 | [
"https://github.com/salesforceairesearch/fofo"
] | https://huggingface.co/papers/2402.18667 | 0 | 0 | 0 | 8 | https://aclanthology.org/2024.acl-long.40/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.41.bib | @inproceedings{yoo-etal-2024-hyper,
title = "Hyper-{CL}: Conditioning Sentence Representations with Hypernetworks",
author = "Yoo, Young and
Cha, Jii and
Kim, Changhyeon and
Kim, Taeuk",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.41",
pages = "700--711",
abstract = "While the introduction of contrastive learning frameworks in sentence representation learning has significantly contributed to advancements in the field, it still remains unclear whether state-of-the-art sentence embeddings can capture the fine-grained semantics of sentences, particularly when conditioned on specific perspectives.In this paper, we introduce Hyper-CL, an efficient methodology that integrates hypernetworks with contrastive learning to compute conditioned sentence representations.In our proposed approach, the hypernetwork is responsible for transforming pre-computed condition embeddings into corresponding projection layers. This enables the same sentence embeddings to be projected differently according to various conditions.Evaluation on two representative conditioning benchmarks, namely conditional semantic text similarity and knowledge graph completion, demonstrates that Hyper-CL is effective in flexibly conditioning sentence representations, showcasing its computational efficiency at the same time.We also provide a comprehensive analysis of the inner workings of our approach, leading to a better interpretation of its mechanisms.",
}
| While the introduction of contrastive learning frameworks in sentence representation learning has significantly contributed to advancements in the field, it still remains unclear whether state-of-the-art sentence embeddings can capture the fine-grained semantics of sentences, particularly when conditioned on specific perspectives. In this paper, we introduce Hyper-CL, an efficient methodology that integrates hypernetworks with contrastive learning to compute conditioned sentence representations. In our proposed approach, the hypernetwork is responsible for transforming pre-computed condition embeddings into corresponding projection layers. This enables the same sentence embeddings to be projected differently according to various conditions. Evaluation on two representative conditioning benchmarks, namely conditional semantic text similarity and knowledge graph completion, demonstrates that Hyper-CL is effective in flexibly conditioning sentence representations, showcasing its computational efficiency at the same time. We also provide a comprehensive analysis of the inner workings of our approach, leading to a better interpretation of its mechanisms. | [
"Yoo, Young",
"Cha, Jii",
"Kim, Changhyeon",
"Kim, Taeuk"
] | Hyper-CL: Conditioning Sentence Representations with Hypernetworks | acl-long.41 | Poster | 2403.09490 | [
"https://github.com/hyu-nlp/hyper-cl"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.41/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.42.bib | @inproceedings{lim-etal-2024-analysis,
title = "Analysis of Multi-Source Language Training in Cross-Lingual Transfer",
author = "Lim, Seonghoon and
Yun, Taejun and
Kim, Jinhyeon and
Choi, Jihun and
Kim, Taeuk",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.42",
pages = "712--725",
abstract = "The successful adaptation of multilingual language models (LMs) to a specific language-task pair critically depends on the availability of data tailored for that condition. While cross-lingual transfer (XLT) methods have contributed to addressing this data scarcity problem, there still exists ongoing debate about the mechanisms behind their effectiveness.In this work, we focus on one of promising assumptions about inner workings of XLT, that it encourages multilingual LMs to place greater emphasis on language-agnostic or task-specific features. We test this hypothesis by examining how the patterns of XLT change with a varying number of source languages involved in the process.Our experimental findings show that the use of multiple source languages in XLT-a technique we term Multi-Source Language Training (MSLT)-leads to increased mingling of embedding spaces for different languages, supporting the claim that XLT benefits from making use of language-independent information. On the other hand, we discover that using an arbitrary combination of source languages does not always guarantee better performance. We suggest simple heuristics for identifying effective language combinations for MSLT and empirically prove its effectiveness.",
}
| The successful adaptation of multilingual language models (LMs) to a specific language-task pair critically depends on the availability of data tailored for that condition. While cross-lingual transfer (XLT) methods have contributed to addressing this data scarcity problem, there still exists ongoing debate about the mechanisms behind their effectiveness. In this work, we focus on one of the promising assumptions about the inner workings of XLT: that it encourages multilingual LMs to place greater emphasis on language-agnostic or task-specific features. We test this hypothesis by examining how the patterns of XLT change with a varying number of source languages involved in the process. Our experimental findings show that the use of multiple source languages in XLT-a technique we term Multi-Source Language Training (MSLT)-leads to increased mingling of embedding spaces for different languages, supporting the claim that XLT benefits from making use of language-independent information. On the other hand, we discover that using an arbitrary combination of source languages does not always guarantee better performance. We suggest simple heuristics for identifying effective language combinations for MSLT and empirically prove their effectiveness. | [
"Lim, Seonghoon",
"Yun, Taejun",
"Kim, Jinhyeon",
"Choi, Jihun",
"Kim, Taeuk"
] | Analysis of Multi-Source Language Training in Cross-Lingual Transfer | acl-long.42 | Poster | 2402.13562 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.42/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.43.bib | @inproceedings{ghosh-etal-2024-abex,
title = "{ABEX}: Data Augmentation for Low-Resource {NLU} via Expanding Abstract Descriptions",
author = "Ghosh, Sreyan and
Tyagi, Utkarsh and
Kumar, Sonal and
Evuru, Chandra Kiran and
S, Ramaneswaran and
Sakshi, S and
Manocha, Dinesh",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.43",
pages = "726--748",
abstract = "We present ABEX, a novel and effective generative data augmentation methodology for low-resource Natural Language Understanding (NLU) tasks. ABEX is based on ABstract-and-EXpand, a novel paradigm for generating diverse forms of an input document {--} we first convert a document into its concise, abstract description and then generate new documents based on expanding the resultant abstraction. To learn the task of expanding abstract descriptions, we first train BART on a large-scale synthetic dataset with abstract-document pairs. Next, to generate abstract descriptions for a document, we propose a simple, controllable, and training-free method based on editing AMR graphs. ABEX brings the best of both worlds: by expanding from abstract representations, it preserves the original semantic properties of the documents, like style and meaning, thereby maintaining alignment with the original label and data distribution. At the same time, the fundamental process of elaborating on abstract descriptions facilitates diverse generations. We demonstrate the effectiveness of ABEX on 4 NLU tasks spanning 12 datasets and 4 low-resource settings. ABEX outperforms all our baselines qualitatively with improvements of 0.04{\%} - 38.8{\%}. Qualitatively, ABEX outperforms all prior methods from literature in terms of context and length diversity.",
}
| We present ABEX, a novel and effective generative data augmentation methodology for low-resource Natural Language Understanding (NLU) tasks. ABEX is based on ABstract-and-EXpand, a novel paradigm for generating diverse forms of an input document {--} we first convert a document into its concise, abstract description and then generate new documents based on expanding the resultant abstraction. To learn the task of expanding abstract descriptions, we first train BART on a large-scale synthetic dataset with abstract-document pairs. Next, to generate abstract descriptions for a document, we propose a simple, controllable, and training-free method based on editing AMR graphs. ABEX brings the best of both worlds: by expanding from abstract representations, it preserves the original semantic properties of the documents, like style and meaning, thereby maintaining alignment with the original label and data distribution. At the same time, the fundamental process of elaborating on abstract descriptions facilitates diverse generations. We demonstrate the effectiveness of ABEX on 4 NLU tasks spanning 12 datasets and 4 low-resource settings. ABEX outperforms all our baselines quantitatively with improvements of 0.04{\%} - 38.8{\%}. Qualitatively, ABEX outperforms all prior methods from the literature in terms of context and length diversity. | [
"Ghosh, Sreyan",
"Tyagi, Utkarsh",
"Kumar, Sonal",
"Evuru, Ch",
"ra Kiran",
"S, Ramaneswaran",
"Sakshi, S",
"Manocha, Dinesh"
] | ABEX: Data Augmentation for Low-Resource NLU via Expanding Abstract Descriptions | acl-long.43 | Poster | 2406.04286 | [
"https://github.com/sreyan88/abex"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.43/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.44.bib | @inproceedings{bandarkar-etal-2024-belebele,
title = "The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants",
author = "Bandarkar, Lucas and
Liang, Davis and
Muller, Benjamin and
Artetxe, Mikel and
Shukla, Satya Narayan and
Husa, Donald and
Goyal, Naman and
Krishnan, Abhinandan and
Zettlemoyer, Luke and
Khabsa, Madian",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.44",
pages = "749--775",
abstract = "We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. Significantly expanding the language coverage of natural language understanding (NLU) benchmarks, this dataset enables the evaluation of text models in high-, medium-, and low-resource languages. Each question is based on a short passage from the FLORES-200 dataset and has four multiple-choice answers. The questions were carefully curated to discriminate between models with different levels of general language comprehension. The English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. We use this dataset to evaluate the capabilities of multilingual masked language models (MLMs) and large language models (LLMs). We present extensive results and findings, notably that despite significant cross-lingual transfer in English-centric LLMs, much smaller MLMs pretrained on balanced multilingual data still understand far more languages. Overall, Belebele opens up new avenues for evaluating and analyzing the multilingual capabilities of NLP systems.",
}
| We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. Significantly expanding the language coverage of natural language understanding (NLU) benchmarks, this dataset enables the evaluation of text models in high-, medium-, and low-resource languages. Each question is based on a short passage from the FLORES-200 dataset and has four multiple-choice answers. The questions were carefully curated to discriminate between models with different levels of general language comprehension. The English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. We use this dataset to evaluate the capabilities of multilingual masked language models (MLMs) and large language models (LLMs). We present extensive results and findings, notably that despite significant cross-lingual transfer in English-centric LLMs, much smaller MLMs pretrained on balanced multilingual data still understand far more languages. Overall, Belebele opens up new avenues for evaluating and analyzing the multilingual capabilities of NLP systems. | [
"B",
"arkar, Lucas",
"Liang, Davis",
"Muller, Benjamin",
"Artetxe, Mikel",
"Shukla, Satya Narayan",
"Husa, Donald",
"Goyal, Naman",
"Krishnan, Abhin",
"an",
"Zettlemoyer, Luke",
"Khabsa, Madian"
] | The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants | acl-long.44 | Poster | 2308.16884 | [
"https://github.com/facebookresearch/belebele"
] | https://huggingface.co/papers/2308.16884 | 4 | 8 | 0 | 10 | https://aclanthology.org/2024.acl-long.44/ | [
"ilsp/Meltemi-7B-v1",
"ilsp/Meltemi-7B-Instruct-v1",
"HiTZ/latxa-7b-v1",
"ilsp/Meltemi-7B-Instruct-v1.5",
"HiTZ/latxa-70b-v1",
"HiTZ/latxa-13b-v1",
"ilsp/Meltemi-7B-v1.5",
"HiTZ/latxa-7b-v1.1",
"SPAHE/Meltemi-7B-Instruct-v1-GGUF",
"HiTZ/latxa-13b-v1.1",
"HiTZ/latxa-70b-v1.1",
"HiTZ/latxa-13b-v1.2",
"HiTZ/latxa-7b-v1.2",
"HiTZ/latxa-70b-v1.2",
"RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf"
] | [
"SEACrowd/belebele",
"OALL/AlGhafa-Arabic-LLM-Benchmark-Native"
] | [
"teketen/idiazabal",
"SantiagoMoreno-UdeA/Latxa-demo"
] | 1 |
https://aclanthology.org/2024.acl-long.45.bib | @inproceedings{an-etal-2024-learn,
title = "Learn from Failure: Fine-tuning {LLM}s with Trial-and-Error Data for Intuitionistic Propositional Logic Proving",
author = "An, Chenyang and
Chen, Zhibo and
Ye, Qihao and
First, Emily and
Peng, Letian and
Zhang, Jiayun and
Wang, Zihan and
Lerner, Sorin and
Shang, Jingbo",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.45",
pages = "776--790",
abstract = "Recent advances in Automated Theorem Proving have shown the effectiveness of leveraging a (large) language model that generates tactics (i.e. proof steps) to search through proof states. The current model, while trained solely on successful proof paths, faces a discrepancy at the inference stage, as it must sample and try various tactics at each proof state until finding success, unlike its training which does not incorporate learning from failed attempts. Intuitively, a tactic that leads to a failed search path would indicate that similar tactics should receive less attention during the following trials. In this paper, we demonstrate the benefit of training models that additionally learn from failed search paths. Facing the lack of such trial-and-error data in existing open-source theorem-proving datasets, we curate a dataset on intuitionistic propositional logic theorems and formalize it in Lean, such that we can reliably check the correctness of proofs. We compare our model trained on relatively short trial-and-error information (TrialMaster) with models trained only on the correct paths and discover that the former solves more unseen theorems with lower trial searches.",
}
| Recent advances in Automated Theorem Proving have shown the effectiveness of leveraging a (large) language model that generates tactics (i.e. proof steps) to search through proof states. The current model, while trained solely on successful proof paths, faces a discrepancy at the inference stage, as it must sample and try various tactics at each proof state until finding success, unlike its training, which does not incorporate learning from failed attempts. Intuitively, a tactic that leads to a failed search path would indicate that similar tactics should receive less attention during the following trials. In this paper, we demonstrate the benefit of training models that additionally learn from failed search paths. Facing the lack of such trial-and-error data in existing open-source theorem-proving datasets, we curate a dataset on intuitionistic propositional logic theorems and formalize it in Lean, such that we can reliably check the correctness of proofs. We compare our model trained on relatively short trial-and-error information (TrialMaster) with models trained only on the correct paths and discover that the former solves more unseen theorems with fewer trial searches. | [
"An, Chenyang",
"Chen, Zhibo",
"Ye, Qihao",
"First, Emily",
"Peng, Letian",
"Zhang, Jiayun",
"Wang, Zihan",
"Lerner, Sorin",
"Shang, Jingbo"
] | Learn from Failure: Fine-tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving | acl-long.45 | Poster | 2404.07382 | [
"https://github.com/ucsd-atp/propl"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.45/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.46.bib | @inproceedings{lee-etal-2024-interactive,
title = "Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach",
author = "Lee, Saehyung and
Yu, Sangwon and
Park, Junsung and
Yi, Jihun and
Yoon, Sungroh",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.46",
pages = "791--809",
abstract = "In this paper, we primarily address the issue of dialogue-form context query within the interactive text-to-image retrieval task. Our methodology, PlugIR, actively utilizes the general instruction-following capability of LLMs in two ways. First, by reformulating the dialogue-form context, we eliminate the necessity of fine-tuning a retrieval model on existing visual dialogue data, thereby enabling the use of any arbitrary black-box model. Second, we construct the LLM questioner to generate non-redundant questions about the attributes of the target image, based on the information of retrieval candidate images in the current context. This approach mitigates the issues of noisiness and redundancy in the generated questions. Beyond our methodology, we propose a novel evaluation metric, Best log Rank Integral (BRI), for a comprehensive assessment of the interactive retrieval system. PlugIR demonstrates superior performance compared to both zero-shot and fine-tuned baselines in various benchmarks. Additionally, the two methodologies comprising PlugIR can be flexibly applied together or separately in various situations.",
}
| In this paper, we primarily address the issue of dialogue-form context query within the interactive text-to-image retrieval task. Our methodology, PlugIR, actively utilizes the general instruction-following capability of LLMs in two ways. First, by reformulating the dialogue-form context, we eliminate the necessity of fine-tuning a retrieval model on existing visual dialogue data, thereby enabling the use of any arbitrary black-box model. Second, we construct the LLM questioner to generate non-redundant questions about the attributes of the target image, based on the information of retrieval candidate images in the current context. This approach mitigates the issues of noisiness and redundancy in the generated questions. Beyond our methodology, we propose a novel evaluation metric, Best log Rank Integral (BRI), for a comprehensive assessment of the interactive retrieval system. PlugIR demonstrates superior performance compared to both zero-shot and fine-tuned baselines in various benchmarks. Additionally, the two methodologies comprising PlugIR can be flexibly applied together or separately in various situations. | [
"Lee, Saehyung",
"Yu, Sangwon",
"Park, Junsung",
"Yi, Jihun",
"Yoon, Sungroh"
] | Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach | acl-long.46 | Oral | 2406.03411 | [
"https://github.com/saehyung-lee/plugir"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.46/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.47.bib | @inproceedings{lin-etal-2024-imbue,
title = "{IMBUE}: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction",
author = "Lin, Inna and
Sharma, Ashish and
Rytting, Christopher and
Miner, Adam and
Suh, Jina and
Althoff, Tim",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.47",
pages = "810--840",
abstract = "Navigating certain communication situations can be challenging due to individuals{'} lack of skills and the interference of strong emotions. However, effective learning opportunities are rarely accessible. In this work, we conduct a human-centered study that uses language models to simulate bespoke communication training and provide just-in-time feedback to support the practice and learning of interpersonal effectiveness skills. We apply the interpersonal effectiveness framework from Dialectical Behavioral Therapy (DBT), DEAR MAN, which focuses on both conversational and emotional skills. We present IMBUE, an interactive training system that provides feedback 28{\%} more similar to experts{'} feedback, compared to that generated by GPT-4. IMBUE is the first to focus on communication skills and emotion management simultaneously, incorporate experts{'} domain knowledge in providing feedback, and be grounded in psychology theory. Through a randomized trial of 86 participants, we find that IMBUE{'}s simulation-only variant significantly improves participants{'} self-efficacy (up to 17{\%}) and reduces negative emotions (up to 25{\%}). With IMBUE{'}s additional just-in-time feedback, participants demonstrate 17{\%} improvement in skill mastery, along with greater enhancements in self-efficacy (27{\%} more) and reduction of negative emotions (16{\%} more) compared to simulation-only. The improvement in skill mastery is the only measure that is transferred to new and more difficult situations; situation-specific training is necessary for improving self-efficacy and emotion reduction.",
}
| Navigating certain communication situations can be challenging due to individuals{'} lack of skills and the interference of strong emotions. However, effective learning opportunities are rarely accessible. In this work, we conduct a human-centered study that uses language models to simulate bespoke communication training and provide just-in-time feedback to support the practice and learning of interpersonal effectiveness skills. We apply the interpersonal effectiveness framework from Dialectical Behavioral Therapy (DBT), DEAR MAN, which focuses on both conversational and emotional skills. We present IMBUE, an interactive training system that provides feedback 28{\%} more similar to experts{'} feedback, compared to that generated by GPT-4. IMBUE is the first to focus on communication skills and emotion management simultaneously, incorporate experts{'} domain knowledge in providing feedback, and be grounded in psychology theory. Through a randomized trial of 86 participants, we find that IMBUE{'}s simulation-only variant significantly improves participants{'} self-efficacy (up to 17{\%}) and reduces negative emotions (up to 25{\%}). With IMBUE{'}s additional just-in-time feedback, participants demonstrate 17{\%} improvement in skill mastery, along with greater enhancements in self-efficacy (27{\%} more) and reduction of negative emotions (16{\%} more) compared to simulation-only. The improvement in skill mastery is the only measure that is transferred to new and more difficult situations; situation-specific training is necessary for improving self-efficacy and emotion reduction. | [
"Lin, Inna",
"Sharma, Ashish",
"Rytting, Christopher",
"Miner, Adam",
"Suh, Jina",
"Althoff, Tim"
] | IMBUE: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction | acl-long.47 | Poster | 2402.12556 | [
""
] | https://huggingface.co/papers/2402.12556 | 0 | 0 | 0 | 6 | https://aclanthology.org/2024.acl-long.47/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.48.bib | @inproceedings{lin-etal-2024-token,
title = "Token-wise Influential Training Data Retrieval for Large Language Models",
author = "Lin, Huawei and
Long, Jikai and
Xu, Zhaozhuo and
Zhao, Weijie",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.48",
pages = "841--860",
abstract = "Given a Large Language Model (LLM) generation, how can we identify which training data led to this generation? In this paper, we proposed RapidIn, a scalable framework adapting to LLMs for estimating the influence of each training data. The proposed framework consists of two stages: caching and retrieval. First, we compress the gradient vectors by over 200,000x, allowing them to be cached on disk or in GPU/CPU memory. Then, given a generation, RapidIn efficiently traverses the cached gradients to estimate the influence within minutes, achieving over a 6,326x speedup. Moreover, RapidIn supports multi-GPU parallelization to substantially accelerate caching and retrieval. Our empirical result confirms the efficiency and effectiveness of RapidIn.",
}
| Given a Large Language Model (LLM) generation, how can we identify which training data led to this generation? In this paper, we propose RapidIn, a scalable framework that adapts to LLMs for estimating the influence of each training data point. The proposed framework consists of two stages: caching and retrieval. First, we compress the gradient vectors by over 200,000x, allowing them to be cached on disk or in GPU/CPU memory. Then, given a generation, RapidIn efficiently traverses the cached gradients to estimate the influence within minutes, achieving over a 6,326x speedup. Moreover, RapidIn supports multi-GPU parallelization to substantially accelerate caching and retrieval. Our empirical results confirm the efficiency and effectiveness of RapidIn. | [
"Lin, Huawei",
"Long, Jikai",
"Xu, Zhaozhuo",
"Zhao, Weijie"
] | Token-wise Influential Training Data Retrieval for Large Language Models | acl-long.48 | Poster | 2405.11724 | [
"https://github.com/huawei-lin/rapidin"
] | https://huggingface.co/papers/2405.11724 | 0 | 0 | 0 | 4 | https://aclanthology.org/2024.acl-long.48/ | [
"huaweilin/rapidin-alpaca-llama2-7b"
] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.49.bib | @inproceedings{weinzierl-harabagiu-2024-tree,
title = "Tree-of-Counterfactual Prompting for Zero-Shot Stance Detection",
author = "Weinzierl, Maxwell and
Harabagiu, Sanda",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.49",
pages = "861--880",
abstract = "Stance detection enables the inference of attitudes from human communications. Automatic stance identification was mostly cast as a classification problem. However, stance decisions involve complex judgments, which can be nowadays generated by prompting Large Language Models (LLMs). In this paper we present a new method for stance identification which (1) relies on a new prompting framework, called Tree-of-Counterfactual prompting; (2) operates not only on textual communications, but also on images; (3) allows more than one stance object type; and (4) requires no examples of stance attribution, thus it is a {``}Tabula Rasa{''} Zero-Shot Stance Detection (TR-ZSSD) method. Our experiments indicate surprisingly promising results, outperforming fine-tuned stance detection systems.",
}
| Stance detection enables the inference of attitudes from human communications. Automatic stance identification has mostly been cast as a classification problem. However, stance decisions involve complex judgments, which can nowadays be generated by prompting Large Language Models (LLMs). In this paper we present a new method for stance identification which (1) relies on a new prompting framework, called Tree-of-Counterfactual prompting; (2) operates not only on textual communications, but also on images; (3) allows more than one stance object type; and (4) requires no examples of stance attribution, thus it is a {``}Tabula Rasa{''} Zero-Shot Stance Detection (TR-ZSSD) method. Our experiments indicate surprisingly promising results, outperforming fine-tuned stance detection systems. | [
"Weinzierl, Maxwell",
"Harabagiu, S",
"a"
] | Tree-of-Counterfactual Prompting for Zero-Shot Stance Detection | acl-long.49 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.49/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.50.bib | @inproceedings{koh-etal-2024-visualwebarena,
title = "{V}isual{W}eb{A}rena: Evaluating Multimodal Agents on Realistic Visual Web Tasks",
author = "Koh, Jing Yu and
Lo, Robert and
Jang, Lawrence and
Duvvur, Vikram and
Lim, Ming and
Huang, Po-Yu and
Neubig, Graham and
Zhou, Shuyan and
Salakhutdinov, Russ and
Fried, Daniel",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.50",
pages = "881--905",
abstract = "Autonomous agents capable of planning, reasoning, and executing actions on the web offer a promising avenue for automating computer tasks. However, the majority of existing benchmarks primarily focus on text-based agents, neglecting many natural tasks that require visual information to effectively solve. Given that most computer interfaces cater to human perception, visual information often augments textual data in ways that text-only models struggle to harness effectively. To bridge this gap, we introduce VisualWebArena, a benchmark designed to assess the performance of multimodal web agents on *realistic visually grounded tasks*. VisualWebArena comprises of a set of diverse and complex web-based tasks that evaluate various capabilities of autonomous multimodal agents. To perform on this benchmark, agents need to accurately process image-text inputs, interpret natural language instructions, and execute actions on websites to accomplish user-defined objectives. We conduct an extensive evaluation of state-of-the-art LLM-based autonomous agents, including several multimodal models. Through extensive quantitative and qualitative analysis, we identify several limitations of text-only LLM agents, and reveal gaps in the capabilities of state-of-the-art multimodal language agents. VisualWebArena provides a framework for evaluating multimodal autonomous language agents, and offers insights towards building stronger autonomous agents for the web.",
}
| Autonomous agents capable of planning, reasoning, and executing actions on the web offer a promising avenue for automating computer tasks. However, the majority of existing benchmarks primarily focus on text-based agents, neglecting many natural tasks that require visual information to solve effectively. Given that most computer interfaces cater to human perception, visual information often augments textual data in ways that text-only models struggle to harness effectively. To bridge this gap, we introduce VisualWebArena, a benchmark designed to assess the performance of multimodal web agents on *realistic visually grounded tasks*. VisualWebArena comprises a set of diverse and complex web-based tasks that evaluate various capabilities of autonomous multimodal agents. To perform on this benchmark, agents need to accurately process image-text inputs, interpret natural language instructions, and execute actions on websites to accomplish user-defined objectives. We conduct an extensive evaluation of state-of-the-art LLM-based autonomous agents, including several multimodal models. Through extensive quantitative and qualitative analysis, we identify several limitations of text-only LLM agents, and reveal gaps in the capabilities of state-of-the-art multimodal language agents. VisualWebArena provides a framework for evaluating multimodal autonomous language agents, and offers insights towards building stronger autonomous agents for the web. | [
"Koh, Jing Yu",
"Lo, Robert",
"Jang, Lawrence",
"Duvvur, Vikram",
"Lim, Ming",
"Huang, Po-Yu",
"Neubig, Graham",
"Zhou, Shuyan",
"Salakhutdinov, Russ",
"Fried, Daniel"
] | VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks | acl-long.50 | Poster | 2401.13649 | [
"https://github.com/web-arena-x/visualwebarena"
] | https://huggingface.co/papers/2401.13649 | 0 | 1 | 0 | 10 | https://aclanthology.org/2024.acl-long.50/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.51.bib | @inproceedings{song-etal-2024-finesure,
title = "{F}ine{S}ur{E}: Fine-grained Summarization Evaluation using {LLM}s",
author = "Song, Hwanjun and
Su, Hang and
Shalyminov, Igor and
Cai, Jason and
Mansour, Saab",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.51",
pages = "906--922",
abstract = "Automated evaluation is crucial for streamlining text summarization benchmarking and model development, given the costly and time-consuming nature of human evaluation. Traditional methods like ROUGE do not correlate well with human judgment, while recently proposed LLM-based metrics provide only summary-level assessment using Likert-scale scores. This limits deeper model analysis, e.g., we can only assign one hallucination score at the summary level, while at the sentence level, we can count sentences containing hallucinations. To remedy those limitations, we propose FineSurE, a fine-grained evaluator specifically tailored for the summarization task using large language models (LLMs). It also employs completeness and conciseness criteria, in addition to faithfulness, enabling multi-dimensional assessment. We compare various open-source and proprietary LLMs as backbones for FineSurE. In addition, we conduct extensive benchmarking of FineSurE against SOTA methods including NLI-, QA-, and LLM-based methods, showing improved performance especially on the completeness and conciseness dimensions. The code is available at https://github.com/DISL-Lab/FineSurE.",
}
| Automated evaluation is crucial for streamlining text summarization benchmarking and model development, given the costly and time-consuming nature of human evaluation. Traditional methods like ROUGE do not correlate well with human judgment, while recently proposed LLM-based metrics provide only summary-level assessment using Likert-scale scores. This limits deeper model analysis, e.g., we can only assign one hallucination score at the summary level, while at the sentence level, we can count sentences containing hallucinations. To remedy those limitations, we propose FineSurE, a fine-grained evaluator specifically tailored for the summarization task using large language models (LLMs). It also employs completeness and conciseness criteria, in addition to faithfulness, enabling multi-dimensional assessment. We compare various open-source and proprietary LLMs as backbones for FineSurE. In addition, we conduct extensive benchmarking of FineSurE against SOTA methods including NLI-, QA-, and LLM-based methods, showing improved performance especially on the completeness and conciseness dimensions. The code is available at https://github.com/DISL-Lab/FineSurE. | [
"Song, Hwanjun",
"Su, Hang",
"Shalyminov, Igor",
"Cai, Jason",
"Mansour, Saab"
] | FineSurE: Fine-grained Summarization Evaluation using LLMs | acl-long.51 | Poster | 2407.00908 | [
"https://github.com/disl-lab/finesure-acl24"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.51/ | [] | [] | [] | 0 |
|
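Aside on the FineSurE entry above: once an LLM judge has produced sentence-level error judgments and key-fact alignment judgments, percentage scores of the kind the abstract describes reduce to simple ratios. A minimal sketch, assuming boolean judgment lists as input; the prompting protocol and the exact score definitions should be checked against the paper:

```python
# Illustrative percentage scores for fine-grained summarization evaluation,
# computed from per-sentence and per-key-fact boolean judgments that an
# LLM judge is assumed to have already produced. The definitions below are
# plausible readings of the abstract, not the paper's verbatim formulas.
def faithfulness(sentence_has_error: list[bool]) -> float:
    """Fraction of summary sentences judged free of factuality errors."""
    return sum(not e for e in sentence_has_error) / len(sentence_has_error)

def completeness(keyfact_covered: list[bool]) -> float:
    """Fraction of source key facts aligned with the summary."""
    return sum(keyfact_covered) / len(keyfact_covered)

def conciseness(sentence_relevant: list[bool]) -> float:
    """Fraction of summary sentences aligned with at least one key fact."""
    return sum(sentence_relevant) / len(sentence_relevant)

# e.g., a 4-sentence summary with one hallucinated sentence scores 0.75:
print(faithfulness([False, False, True, False]))
```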
https://aclanthology.org/2024.acl-long.52.bib | @inproceedings{ahn-etal-2024-tuning,
title = "Tuning Large Multimodal Models for Videos using Reinforcement Learning from {AI} Feedback",
author = "Ahn, Daechul and
Choi, Yura and
Yu, Youngjae and
Kang, Dongyeop and
Choi, Jonghyun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.52",
pages = "923--940",
abstract = "Recent advancements in large language models have influenced the development of video large multimodal models (VLMMs). Previous approaches for VLMMs involve Supervised Fine-Tuning (SFT) with instruction-tuned datasets, integrating LLM with visual encoders, and additional learnable parameters. Here, aligning video with text, and vice versa, remains a challenge, primarily due to the insufficient quality and quantity of multimodal instruction-tune data compared to that of text-only. This discrepancy often results in alignments that poorly ground the video content. To address this, we present a novel alignment strategy that employs a multimodal AI system equipped with Reinforcement Learning from AI Feedback (RLAIF), providing self-preference feedback to refine itself and facilitating the alignment of video and text modalities. Our approach uniquely integrates detailed video descriptions as context into a multimodal AI system during the preference feedback generation to enrich the understanding of video content, a process we call context-aware reward modeling. Empirical evaluations on various video benchmarks demonstrate that our VLM-RLAIF outperforms existing approaches, including the SFT model. We commit to open-sourcing our code, models, and datasets to foster further research in this area.",
}
| Recent advancements in large language models have influenced the development of video large multimodal models (VLMMs). Previous approaches for VLMMs involve Supervised Fine-Tuning (SFT) with instruction-tuning datasets, integrating LLMs with visual encoders, and additional learnable parameters. Here, aligning video with text, and vice versa, remains a challenge, primarily due to the insufficient quality and quantity of multimodal instruction-tuning data compared to text-only data. This discrepancy often results in alignments that poorly ground the video content. To address this, we present a novel alignment strategy that employs a multimodal AI system equipped with Reinforcement Learning from AI Feedback (RLAIF), providing self-preference feedback to refine itself and facilitating the alignment of video and text modalities. Our approach uniquely integrates detailed video descriptions as context into a multimodal AI system during the preference feedback generation to enrich the understanding of video content, a process we call context-aware reward modeling. Empirical evaluations on various video benchmarks demonstrate that our VLM-RLAIF outperforms existing approaches, including the SFT model. We commit to open-sourcing our code, models, and datasets to foster further research in this area. | [
"Ahn, Daechul",
"Choi, Yura",
"Yu, Youngjae",
"Kang, Dongyeop",
"Choi, Jonghyun"
] | Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback | acl-long.52 | Oral | 2402.03746 | [
"https://github.com/yonseivnl/vlm-rlaif"
] | https://huggingface.co/papers/2402.03746 | 3 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-long.52/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.53.bib | @inproceedings{zhan-etal-2024-prompt,
title = "Prompt Refinement with Image Pivot for Text-to-Image Generation",
author = "Zhan, Jingtao and
Ai, Qingyao and
Liu, Yiqun and
Pan, Yingwei and
Yao, Ting and
Mao, Jiaxin and
Ma, Shaoping and
Mei, Tao",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.53",
pages = "941--954",
abstract = "For text-to-image generation, automatically refining user-provided natural language prompts into the keyword-enriched prompts favored by systems is essential for the user experience. Such a prompt refinement process is analogous to translating the prompt from {``}user languages{''} into {``}system languages{''}. However, the scarcity of such parallel corpora makes it difficult to train a prompt refinement model. Inspired by zero-shot machine translation techniques, we introduce Prompt Refinement with Image Pivot (PRIP). PRIP innovatively uses the latent representation of a user-preferred image as an intermediary {``}pivot{''} between the user and system languages. It decomposes the refinement process into two data-rich tasks: inferring representations of user-preferred images from user languages and subsequently translating image representations into system languages. Thus, it can leverage abundant data for training. Extensive experiments show that PRIP substantially outperforms a wide range of baselines and effectively transfers to unseen systems in a zero-shot manner.",
}
| For text-to-image generation, automatically refining user-provided natural language prompts into the keyword-enriched prompts favored by systems is essential for the user experience. Such a prompt refinement process is analogous to translating the prompt from {``}user languages{''} into {``}system languages{''}. However, the scarcity of such parallel corpora makes it difficult to train a prompt refinement model. Inspired by zero-shot machine translation techniques, we introduce Prompt Refinement with Image Pivot (PRIP). PRIP innovatively uses the latent representation of a user-preferred image as an intermediary {``}pivot{''} between the user and system languages. It decomposes the refinement process into two data-rich tasks: inferring representations of user-preferred images from user languages and subsequently translating image representations into system languages. Thus, it can leverage abundant data for training. Extensive experiments show that PRIP substantially outperforms a wide range of baselines and effectively transfers to unseen systems in a zero-shot manner. | [
"Zhan, Jingtao",
"Ai, Qingyao",
"Liu, Yiqun",
"Pan, Yingwei",
"Yao, Ting",
"Mao, Jiaxin",
"Ma, Shaoping",
"Mei, Tao"
] | Prompt Refinement with Image Pivot for Text-to-Image Generation | acl-long.53 | Poster | 2407.00247 | [
"https://github.com/jingtaozhan/promptreformulate"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.53/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.54.bib | @inproceedings{mita-etal-2024-striking,
title = "Striking Gold in Advertising: Standardization and Exploration of Ad Text Generation",
author = "Mita, Masato and
Murakami, Soichiro and
Kato, Akihiko and
Zhang, Peinan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.54",
pages = "955--972",
abstract = "In response to the limitations of manual ad creation, significant research has been conducted in the field of automatic ad text generation (ATG). However, the lack of comprehensive benchmarks and well-defined problem sets has made comparing different methods challenging. To tackle these challenges, we standardize the task of ATG and propose a first benchmark dataset, CAMERA, carefully designed and enabling the utilization of multi-modal information and facilitating industry-wise evaluations. Our extensive experiments with a variety of nine baselines, from classical methods to state-of-the-art models including large language models (LLMs), show the current state and the remaining challenges. We also explore how existing metrics in ATG and an LLM-based evaluator align with human evaluations.",
}
| In response to the limitations of manual ad creation, significant research has been conducted in the field of automatic ad text generation (ATG). However, the lack of comprehensive benchmarks and well-defined problem sets has made comparing different methods challenging. To tackle these challenges, we standardize the task of ATG and propose the first benchmark dataset, CAMERA, carefully designed to enable the utilization of multi-modal information and facilitate industry-wise evaluations. Our extensive experiments with nine diverse baselines, from classical methods to state-of-the-art models including large language models (LLMs), show the current state and the remaining challenges. We also explore how existing metrics in ATG and an LLM-based evaluator align with human evaluations. | [
"Mita, Masato",
"Murakami, Soichiro",
"Kato, Akihiko",
"Zhang, Peinan"
] | Striking Gold in Advertising: Standardization and Exploration of Ad Text Generation | acl-long.54 | Poster | 2309.12030 | [
"https://github.com/cyberagentailab/camera"
] | https://huggingface.co/papers/2309.12030 | 0 | 0 | 0 | 4 | https://aclanthology.org/2024.acl-long.54/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.55.bib | @inproceedings{wang-etal-2024-absinstruct,
title = "{A}bs{I}nstruct: Eliciting Abstraction Ability from {LLM}s through Explanation Tuning with Plausibility Estimation",
author = "Wang, Zhaowei and
Fan, Wei and
Zong, Qing and
Zhang, Hongming and
Choi, Sehyun and
Fang, Tianqing and
Liu, Xin and
Song, Yangqiu and
Wong, Ginny and
See, Simon",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.55",
pages = "973--994",
abstract = "Abstraction ability is crucial in human intelligence, which can also benefit various tasks in NLP study. Existing work shows that LLMs are deficient in abstract ability, and how to improve it remains unexplored. In this work, we design the framework AbsInstruct to enhance LLMs{'} abstraction ability through instruction tuning. The framework builds instructions with in-depth explanations to assist LLMs in capturing the underlying rationale of abstraction. Meanwhile, we introduce a plausibility estimator to select instructions that are more consistent with the abstraction knowledge of LLMs to be aligned. Then, our framework combines abstraction instructions with general-purpose ones to build a hybrid dataset. Extensive experiments and analyses demonstrate that our framework can considerably enhance LLMs{'} abstraction ability with strong generalization performance while maintaining their general instruction-following abilities.",
}
| Abstraction ability is crucial in human intelligence, and it can also benefit various tasks in NLP research. Existing work shows that LLMs are deficient in abstraction ability, and how to improve it remains unexplored. In this work, we design the framework AbsInstruct to enhance LLMs{'} abstraction ability through instruction tuning. The framework builds instructions with in-depth explanations to assist LLMs in capturing the underlying rationale of abstraction. Meanwhile, we introduce a plausibility estimator to select instructions that are more consistent with the abstraction knowledge of LLMs to be aligned. Then, our framework combines abstraction instructions with general-purpose ones to build a hybrid dataset. Extensive experiments and analyses demonstrate that our framework can considerably enhance LLMs{'} abstraction ability with strong generalization performance while maintaining their general instruction-following abilities. | [
"Wang, Zhaowei",
"Fan, Wei",
"Zong, Qing",
"Zhang, Hongming",
"Choi, Sehyun",
"Fang, Tianqing",
"Liu, Xin",
"Song, Yangqiu",
"Wong, Ginny",
"See, Simon"
] | AbsInstruct: Eliciting Abstraction Ability from LLMs through Explanation Tuning with Plausibility Estimation | acl-long.55 | Poster | 2402.10646 | [
"https://github.com/hkust-knowcomp/absinstruct"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.55/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.56.bib | @inproceedings{zhou-etal-2024-reflect,
title = "Reflect-{RL}: Two-Player Online {RL} Fine-Tuning for {LM}s",
author = "Zhou, Runlong and
Du, Simon and
Li, Beibin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.56",
pages = "995--1015",
abstract = "As language models (LMs) demonstrate their capabilities in various fields, their application to tasks requiring multi-round interactions has become increasingly popular. These tasks usually have complex dynamics, so supervised fine-tuning (SFT) on a limited offline dataset does not yield good performance. However, only a few works attempted to directly train the LMs within interactive decision-making environments. We aim to create an effective approach to fine-tune LMs with online reinforcement learning (RL) in these environments. We propose Reflect-RL, a two-player system to fine-tune an LM using SFT and online RL, where a frozen reflection model (player) assists the policy model (player). To generate data for the warm-up SFT stage, we use negative example generation to enhance the error-correction ability of the reflection model. Furthermore, we designed single-prompt action enumeration and applied curriculum learning to allow the policy model to learn more efficiently. Empirically, we verify that Reflect-RL outperforms SFT and online RL without reflection. Testing results indicate GPT-2 XL 1.56B fine-tuned with Reflect-RL outperforms larger open-source LMs, such as Mistral 7B. The benchmarks, dataset, and code involved in this work are publicly available: https://github.com/zhourunlong/Reflect-RL.",
}
| As language models (LMs) demonstrate their capabilities in various fields, their application to tasks requiring multi-round interactions has become increasingly popular. These tasks usually have complex dynamics, so supervised fine-tuning (SFT) on a limited offline dataset does not yield good performance. However, only a few works have attempted to directly train the LMs within interactive decision-making environments. We aim to create an effective approach to fine-tune LMs with online reinforcement learning (RL) in these environments. We propose Reflect-RL, a two-player system to fine-tune an LM using SFT and online RL, where a frozen reflection model (one player) assists the policy model (the other player). To generate data for the warm-up SFT stage, we use negative example generation to enhance the error-correction ability of the reflection model. Furthermore, we design single-prompt action enumeration and apply curriculum learning to allow the policy model to learn more efficiently. Empirically, we verify that Reflect-RL outperforms SFT and online RL without reflection. Testing results indicate that GPT-2 XL 1.56B fine-tuned with Reflect-RL outperforms larger open-source LMs, such as Mistral 7B. The benchmarks, dataset, and code involved in this work are publicly available: https://github.com/zhourunlong/Reflect-RL. | [
"Zhou, Runlong",
"Du, Simon",
"Li, Beibin"
] | Reflect-RL: Two-Player Online RL Fine-Tuning for LMs | acl-long.56 | Poster | 2402.12621 | [
"https://github.com/zhourunlong/reflect-rl"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.56/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.57.bib | @inproceedings{yang-etal-2024-chatgpts,
title = "Can {C}hat{GPT}{'}s Performance be Improved on Verb Metaphor Detection Tasks? Bootstrapping and Combining Tacit Knowledge",
author = "Yang, Cheng and
Chen, Puli and
Huang, Qingbao",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.57",
pages = "1016--1027",
abstract = "Metaphors detection, as an important task in the field of NLP, has been receiving sustained academic attention in recent years. Current researches focus supervised metaphors detection systems, which usually require large-scale, high-quality labeled data support. The emerge of large language models (e.g., ChatGPT) has made many NLP tasks (e.g., automatic summarization and dialogue systems) a qualitative leap. However, it is worth noting that the use of ChatGPT for unsupervised metaphors detection is often challenged with less-than-expected performance. Therefore, the aim of our work is to explore how to bootstrap and combine ChatGPT by detecting the most prevalent verb metaphors among metaphors. Our approach first utilizes ChatGPT to obtain literal collocations of target verbs and subject-object pairs of verbs in the text to be detected. Subsequently, these literal collocations and subject-object pairs are mapped to the same set of topics, and finally the verb metaphors are detected through the analysis of entailment relations. The experimental results show that our method achieves the best performance on the unsupervised verb metaphors detection task compared to existing unsupervised methods or direct prediction using ChatGPT. Our code is available at https://github.com/VILAN-Lab/Unsupervised-Metaphor-Detection.",
}
| Metaphor detection, as an important task in the field of NLP, has been receiving sustained academic attention in recent years. Current research focuses on supervised metaphor detection systems, which usually require large-scale, high-quality labeled data. The emergence of large language models (e.g., ChatGPT) has enabled a qualitative leap in many NLP tasks (e.g., automatic summarization and dialogue systems). However, it is worth noting that using ChatGPT for unsupervised metaphor detection often yields less-than-expected performance. Therefore, the aim of our work is to explore how to bootstrap and combine ChatGPT to detect the most prevalent verb metaphors among metaphors. Our approach first utilizes ChatGPT to obtain literal collocations of target verbs and subject-object pairs of verbs in the text to be detected. Subsequently, these literal collocations and subject-object pairs are mapped to the same set of topics, and finally the verb metaphors are detected through the analysis of entailment relations. The experimental results show that our method achieves the best performance on the unsupervised verb metaphor detection task compared to existing unsupervised methods or direct prediction using ChatGPT. Our code is available at https://github.com/VILAN-Lab/Unsupervised-Metaphor-Detection. | [
"Yang, Cheng",
"Chen, Puli",
"Huang, Qingbao"
] | Can ChatGPT's Performance be Improved on Verb Metaphor Detection Tasks? Bootstrapping and Combining Tacit Knowledge | acl-long.57 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.57/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.58.bib | @inproceedings{yang-etal-2024-self,
title = "Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning",
author = "Yang, Zhaorui and
Pang, Tianyu and
Feng, Haozhe and
Wang, Han and
Chen, Wei and
Zhu, Minfeng and
Liu, Qian",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.58",
pages = "1028--1043",
abstract = "The surge in Large Language Models (LLMs) has revolutionized natural language processing, but fine-tuning them for specific tasks often encounters challenges in balancing performance and preserving general instruction-following abilities. In this paper, we posit that the distribution gap between task datasets and the LLMs serves as the primary underlying cause. To address the problem, we introduce Self-Distillation Fine-Tuning (SDFT), a novel approach that bridges the distribution gap by guiding fine-tuning with a distilled dataset generated by the model itself to match its original distribution. Experimental results on the Llama-2-chat model across various benchmarks demonstrate that SDFT effectively mitigates catastrophic forgetting while achieving comparable or superior performance on downstream tasks compared to the vanilla fine-tuning. Moreover, SDFT demonstrates the potential to maintain the helpfulness and safety alignment of LLMs. Our code is available at https://github.com/sail-sg/sdft.",
}
| The surge in Large Language Models (LLMs) has revolutionized natural language processing, but fine-tuning them for specific tasks often encounters challenges in balancing performance and preserving general instruction-following abilities. In this paper, we posit that the distribution gap between task datasets and the LLMs serves as the primary underlying cause. To address the problem, we introduce Self-Distillation Fine-Tuning (SDFT), a novel approach that bridges the distribution gap by guiding fine-tuning with a distilled dataset generated by the model itself to match its original distribution. Experimental results on the Llama-2-chat model across various benchmarks demonstrate that SDFT effectively mitigates catastrophic forgetting while achieving comparable or superior performance on downstream tasks compared to vanilla fine-tuning. Moreover, SDFT demonstrates the potential to maintain the helpfulness and safety alignment of LLMs. Our code is available at https://github.com/sail-sg/sdft. | [
"Yang, Zhaorui",
"Pang, Tianyu",
"Feng, Haozhe",
"Wang, Han",
"Chen, Wei",
"Zhu, Minfeng",
"Liu, Qian"
] | Self-Distillation Bridges Distribution Gap in Language Model Fine-Tuning | acl-long.58 | Poster | 2402.13669 | [
"https://github.com/sail-sg/sdft"
] | https://huggingface.co/papers/2402.13669 | 1 | 0 | 0 | 7 | https://aclanthology.org/2024.acl-long.58/ | [] | [] | [] | 1 |
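Aside on the SDFT entry above: the core self-distillation step rewrites each task target with the seed model before fine-tuning. A minimal sketch, assuming a Llama-2-chat seed model, an illustrative rewrite prompt, and greedy decoding; none of these details are taken from the paper's exact setup:

```python
# A minimal sketch of the self-distillation idea: instead of fine-tuning
# directly on raw task targets, the seed model first restates each target
# in its own words, and the distilled responses become the SFT targets.
# Model name and prompt template below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed seed model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

def distill_response(instruction: str, reference: str) -> str:
    """Ask the seed model to restate the reference answer in its own style."""
    prompt = (
        "Below is an instruction and a reference answer.\n"
        f"Instruction: {instruction}\n"
        f"Reference answer: {reference}\n"
        "Rewrite the reference answer in your own words:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Keep only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

# The pairs (instruction, distill_response(...)) would then replace the raw
# targets in an otherwise ordinary supervised fine-tuning loop.
```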
https://aclanthology.org/2024.acl-long.59.bib | @inproceedings{zhu-etal-2024-information,
title = "An Information Bottleneck Perspective for Effective Noise Filtering on Retrieval-Augmented Generation",
author = "Zhu, Kun and
Feng, Xiaocheng and
Du, Xiyuan and
Gu, Yuxuan and
Yu, Weijiang and
Wang, Haotian and
Chen, Qianglong and
Chu, Zheng and
Chen, Jingchang and
Qin, Bing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.59",
pages = "1044--1069",
abstract = "Retrieval-augmented generation integrates the capabilities of large language models with relevant information retrieved from an extensive corpus, yet encounters challenges when confronted with real-world noisy data. One recent solution is to train a filter module to find relevant content but only achieve suboptimal noise compression. In this paper, we propose to introduce the information bottleneck theory into retrieval-augmented generation. Our approach involves the filtration of noise by simultaneously maximizing the mutual information between compression and ground output, while minimizing the mutual information between compression and retrieved passage. In addition, we derive the formula of information bottleneck to facilitate its application in novel comprehensive evaluations, the selection of supervised fine-tuning data, and the construction of reinforcement learning rewards. Experimental results demonstrate that our approach achieves significant improvements across various question answering datasets, not only in terms of the correctness of answer generation but also in the conciseness with 2.5{\%} compression rate.",
}
| Retrieval-augmented generation integrates the capabilities of large language models with relevant information retrieved from an extensive corpus, yet encounters challenges when confronted with real-world noisy data. One recent solution is to train a filter module to find relevant content, but this achieves only suboptimal noise compression. In this paper, we propose to introduce the information bottleneck theory into retrieval-augmented generation. Our approach involves the filtration of noise by simultaneously maximizing the mutual information between the compression and the ground-truth output, while minimizing the mutual information between the compression and the retrieved passage. In addition, we derive the formula of the information bottleneck to facilitate its application in novel comprehensive evaluations, the selection of supervised fine-tuning data, and the construction of reinforcement learning rewards. Experimental results demonstrate that our approach achieves significant improvements across various question answering datasets, not only in the correctness of answer generation but also in conciseness, with a 2.5{\%} compression rate. | [
"Zhu, Kun",
"Feng, Xiaocheng",
"Du, Xiyuan",
"Gu, Yuxuan",
"Yu, Weijiang",
"Wang, Haotian",
"Chen, Qianglong",
"Chu, Zheng",
"Chen, Jingchang",
"Qin, Bing"
] | An Information Bottleneck Perspective for Effective Noise Filtering on Retrieval-Augmented Generation | acl-long.59 | Oral | 2406.01549 | [
"https://github.com/zhukun1020/noisefilter_ib"
] | https://huggingface.co/papers/2406.01549 | 0 | 0 | 0 | 10 | https://aclanthology.org/2024.acl-long.59/ | [] | [] | [] | 1 |
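Aside on the entry above: the objective its abstract invokes is the standard information bottleneck, sketched below in generic notation with $X$ the retrieved passage, $\tilde{X}$ the compression, and $Y$ the ground-truth output (the paper's exact formulation may differ):

```latex
% Standard information bottleneck objective: keep the compression
% maximally informative about the output while discarding information
% about the raw retrieved passage.
\[
  \min_{p(\tilde{x}\mid x)} \; I(\tilde{X}; X) \;-\; \beta \, I(\tilde{X}; Y),
  \qquad \beta > 0,
\]
% where larger \beta favors predictive sufficiency over compression.
```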
https://aclanthology.org/2024.acl-long.60.bib | @inproceedings{jiang-etal-2024-rora,
title = "{RORA}: Robust Free-Text Rationale Evaluation",
author = "Jiang, Zhengping and
Lu, Yining and
Chen, Hanjie and
Khashabi, Daniel and
Van Durme, Benjamin and
Liu, Anqi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.60",
pages = "1070--1087",
abstract = "Free-text rationales play a pivotal role in explainable NLP, bridging the knowledge and reasoning gaps behind a model{'}s decision-making. However, due to the diversity of potential reasoning paths and a corresponding lack of definitive ground truth, their evaluation remains a challenge. Existing metrics rely on the degree to which a rationale \textit{supports} a target label, but we find these fall short in evaluating rationales that inadvertently \textit{leak the label}. To address this problem, we propose RORA, a RObust free-text RAtionale evaluation against label leakage. RORA quantifies the new information supplied by a rationale to justify the label. This is achieved by assessing the conditional $\mathcal{V}$-information (Hewitt et al., 2021) with a predictive family robust against leaky features that can be exploited by a small model. RORA consistently outperforms existing approaches in evaluating human-written, synthetic, or model-generated rationales, particularly demonstrating robustness against label leakage. We also show that RORA aligns well with human judgment, providing a more reliable and accurate measurement across diverse free-text rationales.",
}
| Free-text rationales play a pivotal role in explainable NLP, bridging the knowledge and reasoning gaps behind a model{'}s decision-making. However, due to the diversity of potential reasoning paths and a corresponding lack of definitive ground truth, their evaluation remains a challenge. Existing metrics rely on the degree to which a rationale \textit{supports} a target label, but we find these fall short in evaluating rationales that inadvertently \textit{leak the label}. To address this problem, we propose RORA, a RObust free-text RAtionale evaluation against label leakage. RORA quantifies the new information supplied by a rationale to justify the label. This is achieved by assessing the conditional $\mathcal{V}$-information (Hewitt et al., 2021) with a predictive family robust against leaky features that can be exploited by a small model. RORA consistently outperforms existing approaches in evaluating human-written, synthetic, or model-generated rationales, particularly demonstrating robustness against label leakage. We also show that RORA aligns well with human judgment, providing a more reliable and accurate measurement across diverse free-text rationales. | [
"Jiang, Zhengping",
"Lu, Yining",
"Chen, Hanjie",
"Khashabi, Daniel",
"Van Durme, Benjamin",
"Liu, Anqi"
] | RORA: Robust Free-Text Rationale Evaluation | acl-long.60 | Poster | 2402.18678 | [
"https://github.com/zipjiang/rora"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.60/ | [] | [] | [] | 0 |
|
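Aside on the RORA entry above: conditional V-information (Hewitt et al., 2021), which the abstract builds on, has a compact definition. A sketch in generic symbols, with $Y$ the label, $R$ the rationale, and $B$ the leaky baseline features; the exact conditioning set is the paper's design choice:

```latex
% Conditional V-information: the extra predictive information the
% rationale R supplies about the label Y beyond baseline features B,
% measured within a restricted predictive family V.
\[
  I_{\mathcal{V}}(R \to Y \mid B)
    \;=\; H_{\mathcal{V}}(Y \mid B) \;-\; H_{\mathcal{V}}(Y \mid R, B),
\]
% where H_V(. | .) is the conditional V-entropy, i.e., the best
% achievable log loss using predictors drawn from the family V.
```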
https://aclanthology.org/2024.acl-long.61.bib | @inproceedings{qian-etal-2024-tell,
title = "Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents",
author = "Qian, Cheng and
He, Bingxiang and
Zhuang, Zhong and
Deng, Jia and
Qin, Yujia and
Cong, Xin and
Zhang, Zhong and
Zhou, Jie and
Lin, Yankai and
Liu, Zhiyuan and
Sun, Maosong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.61",
pages = "1088--1113",
abstract = "Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions. Although adept at devising strategies and performing tasks, these agents struggle with seeking clarification and grasping precise user intentions. To bridge this gap, we introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users{'} implicit intentions through explicit queries. Next, we propose the incorporation of model experts as the upstream in agent designs to enhance user-agent interaction. Employing IN3, we empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires about user intentions, and refines them into actionable goals before starting downstream agent task execution. Integrating it into the XAgent framework, we comprehensively evaluate the enhanced agent system regarding user instruction understanding and execution, revealing that our approach notably excels at identifying vague user tasks, recovering and summarizing critical missing information, setting precise and necessary agent execution goals, and minimizing redundant tool usage, thus boosting overall efficiency.",
}
| Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions. Although adept at devising strategies and performing tasks, these agents struggle with seeking clarification and grasping precise user intentions. To bridge this gap, we introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users{'} implicit intentions through explicit queries. Next, we propose the incorporation of model experts as the upstream in agent designs to enhance user-agent interaction. Employing IN3, we empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires about user intentions, and refines them into actionable goals before starting downstream agent task execution. Integrating it into the XAgent framework, we comprehensively evaluate the enhanced agent system regarding user instruction understanding and execution, revealing that our approach notably excels at identifying vague user tasks, recovering and summarizing critical missing information, setting precise and necessary agent execution goals, and minimizing redundant tool usage, thus boosting overall efficiency. | [
"Qian, Cheng",
"He, Bingxiang",
"Zhuang, Zhong",
"Deng, Jia",
"Qin, Yujia",
"Cong, Xin",
"Zhang, Zhong",
"Zhou, Jie",
"Lin, Yankai",
"Liu, Zhiyuan",
"Sun, Maosong"
] | Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents | acl-long.61 | Poster | 2402.09205 | [
"https://github.com/hbx-hbx/mistral-interact"
] | https://huggingface.co/papers/2402.09205 | 0 | 0 | 0 | 11 | https://aclanthology.org/2024.acl-long.61/ | [
"hbx/Mistral-Interact"
] | [
"hbx/IN3",
"hbx/IN3-interaction"
] | [] | 1 |
https://aclanthology.org/2024.acl-long.62.bib | @inproceedings{wang-etal-2024-instructprotein,
title = "{I}nstruct{P}rotein: Aligning Human and Protein Language via Knowledge Instruction",
author = "Wang, Zeyuan and
Zhang, Qiang and
Ding, Keyan and
Qin, Ming and
Zhuang, Xiang and
Li, Xiaotong and
Chen, Huajun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.62",
pages = "1114--1136",
abstract = "Large Language Models (LLMs) have revolutionized the field of natural language processing, but they fall short in comprehending biological sequences such as proteins. To address this challenge, we propose InstructProtein, an innovative LLM that possesses bidirectional generation capabilities in both human and protein languages: (i) taking a protein sequence as input to predict its textual function description and (ii) using natural language to prompt protein sequence generation. To achieve this, we first pre-train an LLM on both protein and natural language corpora, enabling it to comprehend individual languages. Then supervised instruction tuning is employed to facilitate the alignment of these two distinct languages. Herein, we introduce a knowledge graph-based instruction generation framework to construct a high-quality instruction dataset, addressing the annotation imbalance and the absence of instructional signals in the existing protein-text corpus. In particular, the instructions inherit the structural relations between proteins and function annotations in knowledge graphs, which empowers our model to engage in the causal modeling of protein functions, akin to the chain-of-thought processes in natural languages. Extensive experiments on bidirectional protein-text generation tasks show that InstructProtein outperforms state-of-the-art LLMs by a large margin.",
}
| Large Language Models (LLMs) have revolutionized the field of natural language processing, but they fall short in comprehending biological sequences such as proteins. To address this challenge, we propose InstructProtein, an innovative LLM that possesses bidirectional generation capabilities in both human and protein languages: (i) taking a protein sequence as input to predict its textual function description and (ii) using natural language to prompt protein sequence generation. To achieve this, we first pre-train an LLM on both protein and natural language corpora, enabling it to comprehend individual languages. Then supervised instruction tuning is employed to facilitate the alignment of these two distinct languages. Herein, we introduce a knowledge graph-based instruction generation framework to construct a high-quality instruction dataset, addressing the annotation imbalance and the absence of instructional signals in the existing protein-text corpus. In particular, the instructions inherit the structural relations between proteins and function annotations in knowledge graphs, which empowers our model to engage in the causal modeling of protein functions, akin to the chain-of-thought processes in natural languages. Extensive experiments on bidirectional protein-text generation tasks show that InstructProtein outperforms state-of-the-art LLMs by a large margin. | [
"Wang, Zeyuan",
"Zhang, Qiang",
"Ding, Keyan",
"Qin, Ming",
"Zhuang, Xiang",
"Li, Xiaotong",
"Chen, Huajun"
] | InstructProtein: Aligning Human and Protein Language via Knowledge Instruction | acl-long.62 | Poster | 2310.03269 | [
""
] | https://huggingface.co/papers/2310.03269 | 0 | 0 | 0 | 7 | https://aclanthology.org/2024.acl-long.62/ | [
"hicai-zju/InstructProtein"
] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.63.bib | @inproceedings{elangovan-etal-2024-considers,
title = "{C}on{S}i{DERS}-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models",
author = "Elangovan, Aparna and
Liu, Ling and
Xu, Lei and
Bodapati, Sravan Babu and
Roth, Dan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.63",
pages = "1137--1160",
abstract = "In this position paper, we argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws upon the insights from disciplines such as user experience research and human behavioral psychology to ensure that the experimental design and results are reliable. The conclusions from these evaluations, therefore, must consider factors such as usability, aesthetics and cognitive biases. We highlight how cognitive biases can conflate fluent information and truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert. Furthermore, the evaluation should differentiate the capabilities and weaknesses of increasingly powerful large language models - which requires effective test sets. Scalability of human evaluation is also crucial to wider adoption. Hence, to design an effective human evaluation system in the age of generative NLP we propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars - Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.",
}
| In this position paper, we argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws upon the insights from disciplines such as user experience research and human behavioral psychology to ensure that the experimental design and results are reliable. The conclusions from these evaluations, therefore, must consider factors such as usability, aesthetics and cognitive biases. We highlight how cognitive biases can conflate fluent information and truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert. Furthermore, the evaluation should differentiate the capabilities and weaknesses of increasingly powerful large language models - which requires effective test sets. Scalability of human evaluation is also crucial to wider adoption. Hence, to design an effective human evaluation system in the age of generative NLP we propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars - Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability. | [
"Elangovan, Aparna",
"Liu, Ling",
"Xu, Lei",
"Bodapati, Sravan Babu",
"Roth, Dan"
] | ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models | acl-long.63 | Poster | 2405.18638 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.63/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.64.bib | @inproceedings{tu-etal-2024-linguistically,
title = "Linguistically Conditioned Semantic Textual Similarity",
author = "Tu, Jingxuan and
Xu, Keer and
Yue, Liulu and
Ye, Bingyang and
Rim, Kyeongmin and
Pustejovsky, James",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.64",
pages = "1161--1172",
abstract = "Semantic textual similarity (STS) is a fundamental NLP task that measures the semantic similarity between a pair of sentences. In order to reduce the inherent ambiguity posed from the sentences, a recent work called Conditional STS (C-STS) has been proposed to measure the sentences{'} similarity conditioned on a certain aspect. Despite the popularity of C-STS, we find that the current C-STS dataset suffers from various issues that could impede proper evaluation on this task. In this paper, we reannotate the C-STS validation set and observe an annotator discrepancy on 55{\%} of the instances resulting from the annotation errors in the original label, ill-defined conditions, and the lack of clarity in the task definition. After a thorough dataset analysis, we improve the C-STS task by leveraging the models{'} capability to understand the conditions under a QA task setting. With the generated answers, we present an automatic error identification pipeline that is able to identify annotation errors from the C-STS data with over 80{\%} F1 score. We also propose a new method that largely improves the performance over baselines on the C-STS data by training the models with the answers. Finally we discuss the conditionality annotation based on the typed-feature structure (TFS) of entity types. We show in examples that the TFS is able to provide a linguistic foundation for constructing C-STS data with new conditions.",
}
| Semantic textual similarity (STS) is a fundamental NLP task that measures the semantic similarity between a pair of sentences. In order to reduce the inherent ambiguity posed by the sentences, a recent work called Conditional STS (C-STS) has been proposed to measure the sentences{'} similarity conditioned on a certain aspect. Despite the popularity of C-STS, we find that the current C-STS dataset suffers from various issues that could impede proper evaluation on this task. In this paper, we reannotate the C-STS validation set and observe an annotator discrepancy on 55{\%} of the instances resulting from annotation errors in the original labels, ill-defined conditions, and the lack of clarity in the task definition. After a thorough dataset analysis, we improve the C-STS task by leveraging the models{'} capability to understand the conditions under a QA task setting. With the generated answers, we present an automatic error identification pipeline that is able to identify annotation errors from the C-STS data with over 80{\%} F1 score. We also propose a new method that largely improves the performance over baselines on the C-STS data by training the models with the answers. Finally, we discuss the conditionality annotation based on the typed-feature structure (TFS) of entity types. We show in examples that the TFS is able to provide a linguistic foundation for constructing C-STS data with new conditions. | [
"Tu, Jingxuan",
"Xu, Keer",
"Yue, Liulu",
"Ye, Bingyang",
"Rim, Kyeongmin",
"Pustejovsky, James"
] | Linguistically Conditioned Semantic Textual Similarity | acl-long.64 | Poster | 2406.03673 | [
"https://github.com/brandeis-llc/L-CSTS"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.64/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.65.bib | @inproceedings{chu-etal-2024-navigate,
title = "Navigate through Enigmatic Labyrinth A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future",
author = "Chu, Zheng and
Chen, Jingchang and
Chen, Qianglong and
Yu, Weijiang and
He, Tao and
Wang, Haotian and
Peng, Weihua and
Liu, Ming and
Qin, Bing and
Liu, Ting",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.65",
pages = "1173--1203",
abstract = "Reasoning, a fundamental cognitive process integral to human intelligence, has garnered substantial interest within artificial intelligence.Notably, recent studies have revealed that chain-of-thought prompting significantly enhances LLM{'}s reasoning capabilities, which attracts widespread attention from both academics and industry.In this paper, we systematically investigate relevant research, summarizing advanced methods through a meticulous taxonomy that offers novel perspectives.Moreover, we delve into the current frontiers and delineate the challenges and future directions, thereby shedding light on future research.Furthermore, we engage in a discussion about open questions.We hope this paper serves as an introduction for beginners and fosters future research.Resources have been made publicly available at https://github.com/zchuz/CoT-Reasoning-Survey",
}
| Reasoning, a fundamental cognitive process integral to human intelligence, has garnered substantial interest within artificial intelligence. Notably, recent studies have revealed that chain-of-thought prompting significantly enhances LLM{'}s reasoning capabilities, which attracts widespread attention from both academics and industry. In this paper, we systematically investigate relevant research, summarizing advanced methods through a meticulous taxonomy that offers novel perspectives. Moreover, we delve into the current frontiers and delineate the challenges and future directions, thereby shedding light on future research. Furthermore, we engage in a discussion about open questions. We hope this paper serves as an introduction for beginners and fosters future research. Resources have been made publicly available at https://github.com/zchuz/CoT-Reasoning-Survey | [
"Chu, Zheng",
"Chen, Jingchang",
"Chen, Qianglong",
"Yu, Weijiang",
"He, Tao",
"Wang, Haotian",
"Peng, Weihua",
"Liu, Ming",
"Qin, Bing",
"Liu, Ting"
] | Navigate through Enigmatic Labyrinth A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future | acl-long.65 | Poster | 2309.15402 | [
"https://github.com/zchuz/cot-reasoning-survey"
] | https://huggingface.co/papers/2309.15402 | 0 | 0 | 0 | 10 | https://aclanthology.org/2024.acl-long.65/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.66.bib | @inproceedings{chu-etal-2024-timebench,
title = "{T}ime{B}ench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models",
author = "Chu, Zheng and
Chen, Jingchang and
Chen, Qianglong and
Yu, Weijiang and
Wang, Haotian and
Liu, Ming and
Qin, Bing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.66",
pages = "1204--1228",
abstract = "Grasping the concept of time is a fundamental facet of human cognition, indispensable for truly comprehending the intricacies of the world.Previous studies typically focus on specific aspects of time, lacking a comprehensive temporal reasoning benchmark.To address this, we propose TimeBench, a comprehensive hierarchical temporal reasoning benchmark that covers a broad spectrum of temporal reasoning phenomena.TimeBench provides a thorough evaluation for investigating the temporal reasoning capabilities of large language models.We conduct extensive experiments on GPT-4, LLaMA2, and other popular LLMs under various settings.Our experimental results indicate a significant performance gap between the state-of-the-art LLMs and humans, highlighting that there is still a considerable distance to cover in temporal reasoning.Besides, LLMs exhibit capability discrepancies across different reasoning categories.Furthermore, we thoroughly analyze the impact of multiple aspects on temporal reasoning and emphasize the associated challenges.We aspire for TimeBench to serve as a comprehensive benchmark, fostering research in temporal reasoning.Code and data are available at https://github.com/zchuz/TimeBench.",
}
| Grasping the concept of time is a fundamental facet of human cognition, indispensable for truly comprehending the intricacies of the world. Previous studies typically focus on specific aspects of time, lacking a comprehensive temporal reasoning benchmark. To address this, we propose TimeBench, a comprehensive hierarchical temporal reasoning benchmark that covers a broad spectrum of temporal reasoning phenomena. TimeBench provides a thorough evaluation for investigating the temporal reasoning capabilities of large language models. We conduct extensive experiments on GPT-4, LLaMA2, and other popular LLMs under various settings. Our experimental results indicate a significant performance gap between the state-of-the-art LLMs and humans, highlighting that there is still a considerable distance to cover in temporal reasoning. Besides, LLMs exhibit capability discrepancies across different reasoning categories. Furthermore, we thoroughly analyze the impact of multiple aspects on temporal reasoning and emphasize the associated challenges. We aspire for TimeBench to serve as a comprehensive benchmark, fostering research in temporal reasoning. Code and data are available at https://github.com/zchuz/TimeBench. | [
"Chu, Zheng",
"Chen, Jingchang",
"Chen, Qianglong",
"Yu, Weijiang",
"Wang, Haotian",
"Liu, Ming",
"Qin, Bing"
] | TimeBench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models | acl-long.66 | Poster | 2311.17667 | [
"https://github.com/zchuz/timebench"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.66/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.67.bib | @inproceedings{chu-etal-2024-beamaggr,
title = "{B}eam{A}gg{R}: Beam Aggregation Reasoning over Multi-source Knowledge for Multi-hop Question Answering",
author = "Chu, Zheng and
Chen, Jingchang and
Chen, Qianglong and
Wang, Haotian and
Zhu, Kun and
Du, Xiyuan and
Yu, Weijiang and
Liu, Ming and
Qin, Bing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.67",
pages = "1229--1248",
abstract = "Large language models (LLMs) have demonstrated strong reasoning capabilities.Nevertheless, they still suffer from factual errors when tackling knowledge-intensive tasks.Retrieval-augmented reasoning represents a promising approach.However, significant challenges still persist, including inaccurate and insufficient retrieval for complex questions, as well as difficulty in integrating multi-source knowledge.To address this, we propose Beam Aggregation Reasoning (BeamAggR), a reasoning framework for knowledge-intensive multi-hop QA.BeamAggR explores and prioritizes promising answers at each hop of question.Concretely, we parse the complex questions into trees, which include atom and composite questions, followed by bottom-up reasoning.For atomic questions, the LLM conducts reasoning on multi-source knowledge to get answer candidates.For composite questions, the LLM combines beam candidates, explores multiple reasoning paths through probabilistic aggregation, and prioritizes the most promising trajectory.Extensive experiments on four open-domain multi-hop reasoning datasets show that our method significantly outperforms SOTA methods by 8.5{\%}.Furthermore, our analysis reveals that BeamAggR elicits better knowledge collaboration and answer aggregation.",
}
| Large language models (LLMs) have demonstrated strong reasoning capabilities. Nevertheless, they still suffer from factual errors when tackling knowledge-intensive tasks. Retrieval-augmented reasoning represents a promising approach. However, significant challenges still persist, including inaccurate and insufficient retrieval for complex questions, as well as difficulty in integrating multi-source knowledge. To address this, we propose Beam Aggregation Reasoning (BeamAggR), a reasoning framework for knowledge-intensive multi-hop QA. BeamAggR explores and prioritizes promising answers at each hop of the question. Concretely, we parse the complex questions into trees, which include atomic and composite questions, followed by bottom-up reasoning. For atomic questions, the LLM conducts reasoning on multi-source knowledge to get answer candidates. For composite questions, the LLM combines beam candidates, explores multiple reasoning paths through probabilistic aggregation, and prioritizes the most promising trajectory. Extensive experiments on four open-domain multi-hop reasoning datasets show that our method significantly outperforms SOTA methods by 8.5{\%}. Furthermore, our analysis reveals that BeamAggR elicits better knowledge collaboration and answer aggregation. | [
"Chu, Zheng",
"Chen, Jingchang",
"Chen, Qianglong",
"Wang, Haotian",
"Zhu, Kun",
"Du, Xiyuan",
"Yu, Weijiang",
"Liu, Ming",
"Qin, Bing"
] | BeamAggR: Beam Aggregation Reasoning over Multi-source Knowledge for Multi-hop Question Answering | acl-long.67 | Oral | 2406.19820 | [
""
] | https://huggingface.co/papers/2406.19820 | 1 | 0 | 0 | 9 | https://aclanthology.org/2024.acl-long.67/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.68.bib | @inproceedings{yuan-etal-2024-analogykb,
title = "{ANALOGYKB}: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base",
author = "Yuan, Siyu and
Chen, Jiangjie and
Sun, Changzhi and
Liang, Jiaqing and
Xiao, Yanghua and
Yang, Deqing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.68",
pages = "1249--1265",
abstract = "Analogical reasoning is a fundamental cognitive ability of humans. However, current language models (LMs) still struggle to achieve human-like performance in analogical reasoning tasks due to a lack of resources for model training. In this work, we address this gap by proposing ANALOGYKB, a million-scale analogy knowledge base (KB) derived from existing knowledge graphs (KGs). ANALOGYKB identifies two types of analogies from the KGs: 1) analogies of the same relations, which can be directly extracted from the KGs, and 2) analogies of analogous relations, which are identified with a selection and filtering pipeline enabled by large language models (LLMs), followed by minor human efforts for data quality control. Evaluations on a series of datasets of two analogical reasoning tasks (analogy recognition and generation) demonstrate that ANALOGYKB successfully enables both smaller LMs and LLMs to gain better analogical reasoning capabilities. Resources of this paper can be found at https://github.com/siyuyuan/analogykb.",
}
| Analogical reasoning is a fundamental cognitive ability of humans. However, current language models (LMs) still struggle to achieve human-like performance in analogical reasoning tasks due to a lack of resources for model training. In this work, we address this gap by proposing ANALOGYKB, a million-scale analogy knowledge base (KB) derived from existing knowledge graphs (KGs). ANALOGYKB identifies two types of analogies from the KGs: 1) analogies of the same relations, which can be directly extracted from the KGs, and 2) analogies of analogous relations, which are identified with a selection and filtering pipeline enabled by large language models (LLMs), followed by minor human efforts for data quality control. Evaluations on a series of datasets of two analogical reasoning tasks (analogy recognition and generation) demonstrate that ANALOGYKB successfully enables both smaller LMs and LLMs to gain better analogical reasoning capabilities. Resources of this paper can be found at https://github.com/siyuyuan/analogykb. | [
"Yuan, Siyu",
"Chen, Jiangjie",
"Sun, Changzhi",
"Liang, Jiaqing",
"Xiao, Yanghua",
"Yang, Deqing"
] | ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base | acl-long.68 | Poster | 2305.05994 | [
"https://github.com/siyuyuan/analogykb"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.68/ | [] | [] | [] | 0 |
|
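The first category of analogies the ANALOGYKB abstract describes, analogies of the same relation, can be extracted directly from KG triples by grouping term pairs that share a relation. A stdlib sketch of that step; the helper name and the cap per relation are assumptions, and the second category (analogous relations with LLM-based filtering) is not modeled here:

```python
from collections import defaultdict
from itertools import combinations

def same_relation_analogies(triples, max_per_relation=2):
    """Pair (head, tail) term pairs that share a relation into analogies.

    triples: iterable of (head, relation, tail).
    Yields (A, B, C, D) meaning "A is to B as C is to D".
    """
    by_relation = defaultdict(list)
    for head, rel, tail in triples:
        by_relation[rel].append((head, tail))
    for rel, pairs in by_relation.items():
        for (a, b), (c, d) in list(combinations(pairs, 2))[:max_per_relation]:
            yield (a, b, c, d)

kg = [
    ("Paris", "capital_of", "France"),
    ("Rome", "capital_of", "Italy"),
    ("oxygen", "element_symbol", "O"),
]
for analogy in same_relation_analogies(kg):
    print("%s : %s :: %s : %s" % analogy)  # Paris : France :: Rome : Italy
```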
https://aclanthology.org/2024.acl-long.69.bib | @inproceedings{feng-etal-2024-tasl,
title = "{T}a{SL}: Continual Dialog State Tracking via Task Skill Localization and Consolidation",
author = "Feng, Yujie and
Chu, Xu and
Xu, Yongxin and
Shi, Guangyuan and
Liu, Bo and
Wu, Xiao-Ming",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.69",
pages = "1266--1279",
abstract = "A practical dialogue system requires the capacity for ongoing skill acquisition and adaptability to new tasks while preserving prior knowledge. However, current methods for Continual Dialogue State Tracking (DST), a crucial function of dialogue systems, struggle with the catastrophic forgetting issue and knowledge transfer between tasks. We present TaSL, a novel framework for task skill localization and consolidation that enables effective knowledge transfer without relying on memory replay. TaSL uses a novel group-wise technique to pinpoint task-specific and task-shared areas. Additionally, a fine-grained skill consolidation strategy protects task-specific knowledge from being forgotten while updating shared knowledge for bi-directional knowledge transfer. As a result, TaSL strikes a balance between preserving previous knowledge and excelling at new tasks. Comprehensive experiments on various backbones highlight the significant performance improvements of TaSL, with a 7.6{\%} absolute increase in Avg. JGA and an 11{\%} absolute rise in BWT metrics over existing state-of-the-art methods. The source code is provided for reproducibility.",
}
| A practical dialogue system requires the capacity for ongoing skill acquisition and adaptability to new tasks while preserving prior knowledge. However, current methods for Continual Dialogue State Tracking (DST), a crucial function of dialogue systems, struggle with the catastrophic forgetting issue and knowledge transfer between tasks. We present TaSL, a novel framework for task skill localization and consolidation that enables effective knowledge transfer without relying on memory replay. TaSL uses a novel group-wise technique to pinpoint task-specific and task-shared areas. Additionally, a fine-grained skill consolidation strategy protects task-specific knowledge from being forgotten while updating shared knowledge for bi-directional knowledge transfer. As a result, TaSL strikes a balance between preserving previous knowledge and excelling at new tasks. Comprehensive experiments on various backbones highlight the significant performance improvements of TaSL, with a 7.6{\%} absolute increase in Avg. JGA and an 11{\%} absolute rise in BWT metrics over existing state-of-the-art methods. The source code is provided for reproducibility. | [
"Feng, Yujie",
"Chu, Xu",
"Xu, Yongxin",
"Shi, Guangyuan",
"Liu, Bo",
"Wu, Xiao-Ming"
] | TaSL: Continual Dialog State Tracking via Task Skill Localization and Consolidation | acl-long.69 | Poster | 2408.09857 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.69/ | [] | [] | [] | 0 |
|
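One plausible reading of the "group-wise task skill localization" in the TaSL abstract is: rank parameter groups by importance on the new task and on previous tasks, then treat the overlap as shared skills to consolidate. A toy sketch under that assumption; the importance dicts (e.g., accumulated |grad * weight| per group) and the thresholding rule are illustrative, not the paper's exact procedure:

```python
def localize_skills(importance_new, importance_prev, top_ratio=0.4):
    """Split parameter groups into task-specific vs. shared regions."""
    def top_groups(imp):
        k = max(1, int(len(imp) * top_ratio))
        return set(sorted(imp, key=imp.get, reverse=True)[:k])

    new_top, prev_top = top_groups(importance_new), top_groups(importance_prev)
    return {
        "task_specific": new_top - prev_top,  # free to update for the new task
        "shared": new_top & prev_top,         # consolidate for knowledge transfer
        "protected": prev_top - new_top,      # guard against forgetting
    }

imp_new = {"layer1": 0.9, "layer2": 0.1, "layer3": 0.8, "layer4": 0.2, "layer5": 0.05}
imp_old = {"layer1": 0.85, "layer2": 0.7, "layer3": 0.1, "layer4": 0.2, "layer5": 0.3}
print(localize_skills(imp_new, imp_old))
# {'task_specific': {'layer3'}, 'shared': {'layer1'}, 'protected': {'layer2'}}
```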
https://aclanthology.org/2024.acl-long.70.bib | @inproceedings{dai-etal-2024-deepseekmoe,
title = "{D}eep{S}eek{M}o{E}: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models",
author = "Dai, Damai and
Deng, Chengqi and
Zhao, Chenggang and
Xu, R.x. and
Gao, Huazuo and
Chen, Deli and
Li, Jiashi and
Zeng, Wangding and
Yu, Xingkai and
Wu, Y. and
Xie, Zhenda and
Li, Y.k. and
Huang, Panpan and
Luo, Fuli and
Ruan, Chong and
Sui, Zhifang and
Liang, Wenfeng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.70",
pages = "1280--1297",
abstract = "In the era of large language models, Mixture-of-Experts (MoE) is a promising architecture for managing computational costs when scaling up model parameters. However, conventional MoE architectures like GShard, which activate the top-$K$ out of $N$ experts, face challenges in ensuring expert specialization, i.e. each expert acquires non-overlapping and focused knowledge. In response, we propose the DeepSeekMoE architecture towards ultimate expert specialization. It involves two principal strategies: (1) finely segmenting the experts into $mN$ ones and activating $mK$ from them, allowing for a more flexible combination of activated experts; (2) isolating $K_s$ experts as shared ones, aiming at capturing common knowledge and mitigating redundancy in routed experts. Starting from a modest scale with 2B parameters, we demonstrate that DeepSeekMoE 2B achieves comparable performance with GShard 2.9B, which has 1.5 $\times$ expert parameters and computation. In addition, DeepSeekMoE 2B nearly approaches the performance of its dense counterpart with the same number of total parameters, which sets the upper bound of MoE models. Subsequently, we scale up DeepSeekMoE to 16B parameters and show that it achieves comparable performance with DeepSeek 7B and LLaMA2 7B, with only about 40{\%} of computations.",
}
| In the era of large language models, Mixture-of-Experts (MoE) is a promising architecture for managing computational costs when scaling up model parameters. However, conventional MoE architectures like GShard, which activate the top-$K$ out of $N$ experts, face challenges in ensuring expert specialization, i.e. each expert acquires non-overlapping and focused knowledge. In response, we propose the DeepSeekMoE architecture towards ultimate expert specialization. It involves two principal strategies: (1) finely segmenting the experts into $mN$ ones and activating $mK$ from them, allowing for a more flexible combination of activated experts; (2) isolating $K_s$ experts as shared ones, aiming at capturing common knowledge and mitigating redundancy in routed experts. Starting from a modest scale with 2B parameters, we demonstrate that DeepSeekMoE 2B achieves comparable performance with GShard 2.9B, which has 1.5 $\times$ expert parameters and computation. In addition, DeepSeekMoE 2B nearly approaches the performance of its dense counterpart with the same number of total parameters, which sets the upper bound of MoE models. Subsequently, we scale up DeepSeekMoE to 16B parameters and show that it achieves comparable performance with DeepSeek 7B and LLaMA2 7B, with only about 40{\%} of computations. | [
"Dai, Damai",
"Deng, Chengqi",
"Zhao, Chenggang",
"Xu, R.x.",
"Gao, Huazuo",
"Chen, Deli",
"Li, Jiashi",
"Zeng, Wangding",
"Yu, Xingkai",
"Wu, Y.",
"Xie, Zhenda",
"Li, Y.k.",
"Huang, Panpan",
"Luo, Fuli",
"Ruan, Chong",
"Sui, Zhifang",
"Liang, Wenfeng"
] | DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models | acl-long.70 | Poster | 2401.06066 | [
"https://github.com/deepseek-ai/deepseek-moe"
] | https://huggingface.co/papers/2401.06066 | 9 | 40 | 2 | 17 | https://aclanthology.org/2024.acl-long.70/ | [
"deepseek-ai/DeepSeek-Coder-V2-Instruct",
"deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
"deepseek-ai/deepseek-moe-16b-chat",
"deepseek-ai/deepseek-moe-16b-base",
"deepseek-ai/DeepSeek-Coder-V2-Base",
"deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
"LoneStriker/DeepSeek-Coder-V2-Instruct-GGUF",
"LoneStriker/DeepSeek-Coder-V2-Lite-Instruct-GGUF",
"casperhansen/deepseek-coder-v2-instruct-awq",
"TechxGenus/DeepSeek-Coder-V2-Lite-Instruct-AWQ",
"CISCai/DeepSeek-Coder-V2-Lite-Instruct-SOTA-GGUF",
"qwp4w3hyb/DeepSeek-Coder-V2-Lite-Instruct-iMat-GGUF",
"bullerwins/DeepSeek-Coder-V2-Instruct-GGUF",
"QuantFactory/DeepSeek-Coder-V2-Lite-Base-GGUF",
"qwp4w3hyb/DeepSeek-Coder-V2-Instruct-iMat-GGUF",
"migtissera/DeepSeek-Coder-V2-Base",
"nesteggs/deepseek-moe-16b-chat",
"TechxGenus/DeepSeek-Coder-V2-Lite-Base-AWQ",
"BirdL/DeepSeek-Coder-V2-Lite-Instruct-FlashAttnPatch",
"QuantFactory/DeepSeek-Coder-V2-Lite-Instruct-GGUF",
"XelotX/DeepSeek-Coder-V2-Instruct-Original",
"sammmbhav/test-model",
"XelotX/DeepSeek-Coder-V2-Lite-Instruct-Original",
"R-Tacoz/deepseek-moe-16b-base-emboe"
] | [] | [
"eduagarcia/open_pt_llm_leaderboard",
"Justinrune/LLaMA-Factory",
"officialhimanshu595/llama-factory",
"Dev1559/QuizBot",
"patched-codes/patched-chat",
"puffy310/ZeroGPU-DeepSeek-V2-LiteCoder",
"hamxa500/deepseek-ai-DeepSeek-Coder-V2-Instruct",
"evelyn-lo/evelyn",
"Sunrusojsis/QuizBot",
"itsjakeo/deepseek-ai-DeepSeek-Coder-V2-Instruct",
"ad4r5hgs/flan-small-text-gen",
"BuiMinh/aaaaaaaaaaaaa",
"nubifere/vis-llm-ft",
"Dovakiins/qwerrwe",
"kenken999/fastapi_django_main_live"
] | 1 |
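The two strategies in the DeepSeekMoE abstract, fine-grained routed experts with top-k activation plus always-active shared experts, can be sketched as a toy PyTorch layer. All sizes and the MLP expert design below are illustrative assumptions; this is not the released DeepSeekMoE implementation:

```python
import torch
import torch.nn as nn

class FineGrainedMoE(nn.Module):
    """Toy MoE layer in the spirit of the abstract: many small routed
    experts with top-k routing, plus shared experts that every token uses."""

    def __init__(self, dim=64, n_routed=16, top_k=4, n_shared=2):
        super().__init__()
        make = lambda: nn.Sequential(nn.Linear(dim, dim * 2), nn.GELU(),
                                     nn.Linear(dim * 2, dim))
        self.routed = nn.ModuleList(make() for _ in range(n_routed))
        self.shared = nn.ModuleList(make() for _ in range(n_shared))
        self.router = nn.Linear(dim, n_routed)
        self.top_k = top_k

    def forward(self, x):                      # x: (tokens, dim)
        gates = self.router(x).softmax(dim=-1)
        weight, idx = gates.topk(self.top_k, dim=-1)
        out = sum(e(x) for e in self.shared)   # shared experts: common knowledge
        for slot in range(self.top_k):         # routed experts: specialization
            for expert_id in range(len(self.routed)):
                mask = idx[:, slot] == expert_id
                if mask.any():
                    out[mask] += weight[mask, slot, None] * self.routed[expert_id](x[mask])
        return out

x = torch.randn(8, 64)
print(FineGrainedMoE()(x).shape)  # torch.Size([8, 64])
```

Segmenting each conventional expert into several smaller ones (and activating proportionally more of them) is what gives the flexible expert combinations the abstract emphasizes.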
https://aclanthology.org/2024.acl-long.71.bib | @inproceedings{qian-etal-2024-grounding,
title = "Grounding Language Model with Chunking-Free In-Context Retrieval",
author = "Qian, Hongjin and
Liu, Zheng and
Mao, Kelong and
Zhou, Yujia and
Dou, Zhicheng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.71",
pages = "1298--1311",
abstract = "This paper presents a novel Chunking-Free In-Context (CFIC) retrieval approach, specifically tailored for Retrieval-Augmented Generation (RAG) systems. Traditional RAG systems often struggle with grounding responses using precise evidence text due to the challenges of processing lengthy documents and filtering out irrelevant content. Commonly employed solutions, such as document chunking and adapting language models to handle longer contexts, have their limitations. These methods either disrupt the semantic coherence of the text or fail to effectively address the issues of noise and inaccuracy in evidence retrieval.The CFIC approach addresses these challenges by circumventing the conventional chunking process. It utilizes the encoded hidden states of documents for in-context retrieval, employing auto-aggressive decoding to accurately identify the specific evidence text required for user queries, eliminating the need for chunking. CFIC is further enhanced by incorporating two innovative decoding strategies, namely Constrained Sentence Prefix Decoding and Skip Decoding. These strategies not only improve the efficiency of the retrieval process but also ensure that the fidelity of the generated grounding text evidence is maintained.Our evaluations of CFIC on a range of open question answering datasets demonstrate its superiority in retrieving relevant and accurate information, offering a significant improvement over traditional methods. By doing away with the need for document chunking, CFIC presents a more streamlined, effective, and efficient retrieval solution, making it a valuable advancement in the field of RAG systems.",
}
| This paper presents a novel Chunking-Free In-Context (CFIC) retrieval approach, specifically tailored for Retrieval-Augmented Generation (RAG) systems. Traditional RAG systems often struggle with grounding responses using precise evidence text due to the challenges of processing lengthy documents and filtering out irrelevant content. Commonly employed solutions, such as document chunking and adapting language models to handle longer contexts, have their limitations. These methods either disrupt the semantic coherence of the text or fail to effectively address the issues of noise and inaccuracy in evidence retrieval. The CFIC approach addresses these challenges by circumventing the conventional chunking process. It utilizes the encoded hidden states of documents for in-context retrieval, employing auto-regressive decoding to accurately identify the specific evidence text required for user queries, eliminating the need for chunking. CFIC is further enhanced by incorporating two innovative decoding strategies, namely Constrained Sentence Prefix Decoding and Skip Decoding. These strategies not only improve the efficiency of the retrieval process but also ensure that the fidelity of the generated grounding text evidence is maintained. Our evaluations of CFIC on a range of open question answering datasets demonstrate its superiority in retrieving relevant and accurate information, offering a significant improvement over traditional methods. By doing away with the need for document chunking, CFIC presents a more streamlined, effective, and efficient retrieval solution, making it a valuable advancement in the field of RAG systems. | [
"Qian, Hongjin",
"Liu, Zheng",
"Mao, Kelong",
"Zhou, Yujia",
"Dou, Zhicheng"
] | Grounding Language Model with Chunking-Free In-Context Retrieval | acl-long.71 | Poster | 2402.09760 | [
""
] | https://huggingface.co/papers/2402.09760 | 1 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-long.71/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.72.bib | @inproceedings{bai-etal-2024-advancing,
title = "Advancing Abductive Reasoning in Knowledge Graphs through Complex Logical Hypothesis Generation",
author = "Bai, Jiaxin and
Wang, Yicheng and
Zheng, Tianshi and
Guo, Yue and
Liu, Xin and
Song, Yangqiu",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.72",
pages = "1312--1329",
abstract = "Abductive reasoning is the process of making educated guesses to provide explanations for observations. Although many applications require the use of knowledge for explanations, the utilization of abductive reasoning in conjunction with structured knowledge, such as a knowledge graph, remains largely unexplored. To fill this gap, this paper introduces the task of complex logical hypothesis generation, as an initial step towards abductive logical reasoning with KG. In this task, we aim to generate a complex logical hypothesis so that it can explain a set of observations. We find that the supervised trained generative model can generate logical hypotheses that are structurally closer to the reference hypothesis. However, when generalized to unseen observations, this training objective does not guarantee better hypothesis generation. To address this, we introduce the Reinforcement Learning from Knowledge Graph (RLF-KG) method, which minimizes differences between observations and conclusions drawn from generated hypotheses according to the KG. Experiments show that, with RLF-KG{'}s assistance, the generated hypotheses provide better explanations, and achieve state-of-the-art results on three widely used KGs.",
}
| Abductive reasoning is the process of making educated guesses to provide explanations for observations. Although many applications require the use of knowledge for explanations, the utilization of abductive reasoning in conjunction with structured knowledge, such as a knowledge graph, remains largely unexplored. To fill this gap, this paper introduces the task of complex logical hypothesis generation, as an initial step towards abductive logical reasoning with KGs. In this task, we aim to generate a complex logical hypothesis so that it can explain a set of observations. We find that a generative model trained with supervision can generate logical hypotheses that are structurally closer to the reference hypothesis. However, when generalized to unseen observations, this training objective does not guarantee better hypothesis generation. To address this, we introduce the Reinforcement Learning from Knowledge Graph (RLF-KG) method, which minimizes differences between observations and conclusions drawn from generated hypotheses according to the KG. Experiments show that, with RLF-KG{'}s assistance, the generated hypotheses provide better explanations, and achieve state-of-the-art results on three widely used KGs. | [
"Bai, Jiaxin",
"Wang, Yicheng",
"Zheng, Tianshi",
"Guo, Yue",
"Liu, Xin",
"Song, Yangqiu"
] | Advancing Abductive Reasoning in Knowledge Graphs through Complex Logical Hypothesis Generation | acl-long.72 | Poster | 2312.15643 | [
"https://github.com/hkust-knowcomp/abductivekgr"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.72/ | [] | [] | [] | 0 |
|
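The RLF-KG training signal described above, minimizing the difference between the observations and the conclusions a generated hypothesis yields on the KG, suggests a set-overlap reward. A toy sketch; Jaccard overlap is an illustrative choice, not necessarily the paper's exact objective:

```python
def kg_feedback_reward(observed, concluded):
    """Toy reward in the spirit of RLF-KG: how well the entities concluded
    from a generated hypothesis (by executing it on the KG) match the
    observed entity set."""
    observed, concluded = set(observed), set(concluded)
    if not observed and not concluded:
        return 1.0
    return len(observed & concluded) / len(observed | concluded)

print(kg_feedback_reward({"e1", "e2", "e3"}, {"e2", "e3", "e4"}))  # 0.5
```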
https://aclanthology.org/2024.acl-long.73.bib | @inproceedings{diao-etal-2024-active,
title = "Active Prompting with Chain-of-Thought for Large Language Models",
author = "Diao, Shizhe and
Wang, Pengcheng and
Lin, Yong and
Pan, Rui and
Liu, Xiang and
Zhang, Tong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.73",
pages = "1330--1350",
abstract = "The increasing scale of large language models (LLMs) brings emergent abilities to various complex tasks requiring reasoning, such as arithmetic and commonsense reasoning. It is known that the effective design of task-specific prompts is critical for LLMs{'} ability to produce high-quality answers. In particular, an effective approach for complex question-and-answering tasks is example-based prompting with chain-of-thought (CoT) reasoning, which significantly improves the performance of LLMs. However, current CoT methods rely on a fixed set of human-annotated exemplars, which are not necessarily the most effective examples for different tasks. This paper proposes a new method, Active-Prompt, to adapt LLMs to different tasks with task-specific example prompts (annotated with human-designed CoT reasoning). For this purpose, we propose a solution to the key problem of determining which questions are the most important and helpful to annotate from a pool of task-specific queries. By borrowing ideas from the related problem of uncertainty-based active learning, we introduce several metrics to characterize the uncertainty so as to select the most uncertain questions for annotation. Experimental results demonstrate the superiority of our proposed method, achieving superior performance on eight complex reasoning tasks. Further analyses of different uncertainty metrics, pool sizes, zero-shot learning, and accuracy-uncertainty relationships demonstrate the effectiveness of our method.",
}
| The increasing scale of large language models (LLMs) brings emergent abilities to various complex tasks requiring reasoning, such as arithmetic and commonsense reasoning. It is known that the effective design of task-specific prompts is critical for LLMs{'} ability to produce high-quality answers. In particular, an effective approach for complex question-and-answering tasks is example-based prompting with chain-of-thought (CoT) reasoning, which significantly improves the performance of LLMs. However, current CoT methods rely on a fixed set of human-annotated exemplars, which are not necessarily the most effective examples for different tasks. This paper proposes a new method, Active-Prompt, to adapt LLMs to different tasks with task-specific example prompts (annotated with human-designed CoT reasoning). For this purpose, we propose a solution to the key problem of determining which questions are the most important and helpful to annotate from a pool of task-specific queries. By borrowing ideas from the related problem of uncertainty-based active learning, we introduce several metrics to characterize the uncertainty so as to select the most uncertain questions for annotation. Experimental results demonstrate the superiority of our proposed method, achieving superior performance on eight complex reasoning tasks. Further analyses of different uncertainty metrics, pool sizes, zero-shot learning, and accuracy-uncertainty relationships demonstrate the effectiveness of our method. | [
"Diao, Shizhe",
"Wang, Pengcheng",
"Lin, Yong",
"Pan, Rui",
"Liu, Xiang",
"Zhang, Tong"
] | Active Prompting with Chain-of-Thought for Large Language Models | acl-long.73 | Poster | 2302.12246 | [
"https://github.com/shizhediao/active-prompt"
] | https://huggingface.co/papers/2302.12246 | 1 | 0 | 0 | 4 | https://aclanthology.org/2024.acl-long.73/ | [] | [] | [] | 1 |
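The uncertainty-driven selection at the heart of Active-Prompt can be sketched as: sample k answers per question, rank questions by disagreement among the samples, and send the most uncertain ones for human CoT annotation. The names and the toy sampler below are assumptions; disagreement (distinct answers / k) is one of several metrics the abstract mentions, and entropy or variance would slot in the same way:

```python
import random
from collections import Counter

def select_uncertain(questions, sample_answers, k=5, budget=2):
    """Rank questions by answer disagreement and pick the most uncertain.

    sample_answers: callable(question, k) -> list of k sampled answers,
    standing in for k stochastic chain-of-thought generations.
    """
    def disagreement(question):
        answers = sample_answers(question, k)
        return len(Counter(answers)) / k   # fraction of distinct answers

    ranked = sorted(questions, key=disagreement, reverse=True)
    return ranked[:budget]   # send these to humans for CoT annotation

fake_llm = lambda q, k: [random.choice(["4", "5"]) if "hard" in q else "4"
                         for _ in range(k)]
print(select_uncertain(["easy sum", "hard sum", "easy diff"], fake_llm))
```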
https://aclanthology.org/2024.acl-long.74.bib | @inproceedings{zhao-etal-2024-easygen,
title = "{E}asy{G}en: Easing Multimodal Generation with {B}i{D}iffuser and {LLM}s",
author = "Zhao, Xiangyu and
Liu, Bo and
Liu, Qijiong and
Shi, Guangyuan and
Wu, Xiao-Ming",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.74",
pages = "1351--1370",
abstract = "We present EasyGen, an efficient model designed to enhance multimodal understanding and generation by harnessing the capabilities of diffusion models and large language models (LLMs). Unlike existing multimodal models that predominately depend on encoders like CLIP or ImageBind and need ample amounts of training data to bridge modalities, EasyGen leverages BiDiffuser, a bidirectional conditional diffusion model, to foster more efficient modality interactions. EasyGen achieves text generation by training a projection layer linking BiDiffuser and an LLM, and facilities image generation by training an adapter to align the LLM{'}s text space with the BiDiffuser{'}s image space. Comprehensive quantitative and qualitative experiments show that EasyGen excels in data-efficient training, high-quality image generation, and extendibility, effectively addressing the challenges in multimodal generation.",
}
| We present EasyGen, an efficient model designed to enhance multimodal understanding and generation by harnessing the capabilities of diffusion models and large language models (LLMs). Unlike existing multimodal models that predominantly depend on encoders like CLIP or ImageBind and need ample amounts of training data to bridge modalities, EasyGen leverages BiDiffuser, a bidirectional conditional diffusion model, to foster more efficient modality interactions. EasyGen achieves text generation by training a projection layer linking BiDiffuser and an LLM, and facilitates image generation by training an adapter to align the LLM{'}s text space with the BiDiffuser{'}s image space. Comprehensive quantitative and qualitative experiments show that EasyGen excels in data-efficient training, high-quality image generation, and extendibility, effectively addressing the challenges in multimodal generation. | [
"Zhao, Xiangyu",
"Liu, Bo",
"Liu, Qijiong",
"Shi, Guangyuan",
"Wu, Xiao-Ming"
] | EasyGen: Easing Multimodal Generation with BiDiffuser and LLMs | acl-long.74 | Poster | 2310.08949 | [
"https://github.com/zxy556677/easygen"
] | https://huggingface.co/papers/2310.08949 | 0 | 1 | 0 | 5 | https://aclanthology.org/2024.acl-long.74/ | [] | [] | [] | 1 |
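The projection layer the EasyGen abstract mentions, linking BiDiffuser's features to the LLM, can be pictured as a small MLP that emits soft prefix tokens in the LLM's embedding space. A toy PyTorch sketch; all dimensions and the two-layer MLP design are illustrative assumptions, not the paper's architecture details:

```python
import torch
import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    """Toy projection layer: map diffusion-model image features into
    soft prefix tokens in an LLM's embedding space."""

    def __init__(self, vision_dim=32, llm_dim=64, n_prefix=4):
        super().__init__()
        self.n_prefix, self.llm_dim = n_prefix, llm_dim
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, n_prefix * llm_dim),
        )

    def forward(self, image_feats):        # (batch, vision_dim)
        out = self.proj(image_feats)
        # Reshape into n_prefix soft tokens to prepend to the text prompt.
        return out.view(-1, self.n_prefix, self.llm_dim)

feats = torch.randn(2, 32)
print(VisionToLLMProjector()(feats).shape)  # torch.Size([2, 4, 64])
```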
https://aclanthology.org/2024.acl-long.75.bib | @inproceedings{li-etal-2024-rewriting,
title = "Rewriting the Code: A Simple Method for Large Language Model Augmented Code Search",
author = "Li, Haochen and
Zhou, Xin and
Shen, Zhiqi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.75",
pages = "1371--1389",
abstract = "In code search, the Generation-Augmented Retrieval (GAR) framework, which generates exemplar code snippets to augment queries, has emerged as a promising strategy to address the principal challenge of modality misalignment between code snippets and natural language queries, particularly with the demonstrated code generation capabilities of Large Language Models (LLMs). Nevertheless, our preliminary investigations indicate that the improvements conferred by such an LLM-augmented framework are somewhat constrained. This limitation could potentially be ascribed to the fact that the generated codes, albeit functionally accurate, frequently display a pronounced stylistic deviation from the ground truth code in the codebase. In this paper, we extend the foundational GAR framework and propose a simple yet effective method that additionally Rewrites the Code (ReCo) within the codebase for style normalization. Experimental results demonstrate that ReCo significantly boosts retrieval accuracy across sparse (up to 35.7{\%}), zero-shot dense (up to 27.6{\%}), and fine-tuned dense (up to 23.6{\%}) retrieval settings in diverse search scenarios. To further elucidate the advantages of ReCo and stimulate research in code style normalization, we introduce Code Style Similarity, the first metric tailored to quantify stylistic similarities in code. Notably, our empirical findings reveal the inadequacy of existing metrics in capturing stylistic nuances. The source code and data are available at https://github.com/Alex-HaochenLi/ReCo.",
}
| In code search, the Generation-Augmented Retrieval (GAR) framework, which generates exemplar code snippets to augment queries, has emerged as a promising strategy to address the principal challenge of modality misalignment between code snippets and natural language queries, particularly with the demonstrated code generation capabilities of Large Language Models (LLMs). Nevertheless, our preliminary investigations indicate that the improvements conferred by such an LLM-augmented framework are somewhat constrained. This limitation could potentially be ascribed to the fact that the generated code, albeit functionally accurate, frequently displays a pronounced stylistic deviation from the ground-truth code in the codebase. In this paper, we extend the foundational GAR framework and propose a simple yet effective method that additionally Rewrites the Code (ReCo) within the codebase for style normalization. Experimental results demonstrate that ReCo significantly boosts retrieval accuracy across sparse (up to 35.7{\%}), zero-shot dense (up to 27.6{\%}), and fine-tuned dense (up to 23.6{\%}) retrieval settings in diverse search scenarios. To further elucidate the advantages of ReCo and stimulate research in code style normalization, we introduce Code Style Similarity, the first metric tailored to quantify stylistic similarities in code. Notably, our empirical findings reveal the inadequacy of existing metrics in capturing stylistic nuances. The source code and data are available at https://github.com/Alex-HaochenLi/ReCo. | [
"Li, Haochen",
"Zhou, Xin",
"Shen, Zhiqi"
] | Rewriting the Code: A Simple Method for Large Language Model Augmented Code Search | acl-long.75 | Oral | 2401.04514 | [
"https://github.com/alex-haochenli/reco"
] | https://huggingface.co/papers/2401.04514 | 0 | 0 | 0 | 3 | https://aclanthology.org/2024.acl-long.75/ | [] | [] | [] | 1 |
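The ReCo idea, normalizing style by rewriting codebase snippets with an LLM and then matching the query's generated exemplar against the rewritten snippets, can be sketched as below. `rewrite` stands in for the LLM call, and the difflib ratio stands in for the sparse/dense retrievers evaluated in the paper; it is not their Code Style Similarity metric:

```python
import difflib

def reco_search(query_code, codebase, rewrite, top_n=1):
    """Rewrite-then-retrieve sketch: style-normalize the codebase, then
    score each rewritten snippet against the query's exemplar code."""
    rewritten = [(snippet, rewrite(snippet)) for snippet in codebase]
    scored = [
        (difflib.SequenceMatcher(None, query_code, styled).ratio(), original)
        for original, styled in rewritten
    ]
    return [code for _, code in sorted(scored, reverse=True)[:top_n]]

codebase = ["def add(a,b):return a+b", "def mul(x,y):\n    return x*y"]
identity_rewrite = lambda s: s   # a real system would call an LLM here
print(reco_search("def add(a, b):\n    return a + b", codebase, identity_rewrite))
```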
https://aclanthology.org/2024.acl-long.76.bib | @inproceedings{baes-etal-2024-multidimensional,
title = "A Multidimensional Framework for Evaluating Lexical Semantic Change with Social Science Applications",
author = "Baes, Naomi and
Haslam, Nick and
Vylomova, Ekaterina",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.76",
pages = "1390--1415",
abstract = "Historical linguists have identified multiple forms of lexical semantic change. We present a three-dimensional framework for integrating these forms and a unified computational methodology for evaluating them concurrently. The dimensions represent increases or decreases in semantic 1) sentiment (valence of a target word{'}s collocates), 2) intensity (emotional arousal of collocates or the frequency of intensifiers), and 3) breadth (diversity of contexts in which the target word appears). These dimensions can be complemented by evaluation of shifts in the frequency of the target words and the thematic content of its collocates. This framework enables lexical semantic change to be mapped economically and systematically and has applications in computational social science. We present an illustrative analysis of semantic shifts in \textit{mental health} and \textit{mental illness} in two corpora, demonstrating patterns of semantic change that illuminate contemporary concerns about pathologization, stigma, and concept creep.",
}
| Historical linguists have identified multiple forms of lexical semantic change. We present a three-dimensional framework for integrating these forms and a unified computational methodology for evaluating them concurrently. The dimensions represent increases or decreases in semantic 1) sentiment (valence of a target word{'}s collocates), 2) intensity (emotional arousal of collocates or the frequency of intensifiers), and 3) breadth (diversity of contexts in which the target word appears). These dimensions can be complemented by evaluation of shifts in the frequency of the target words and the thematic content of their collocates. This framework enables lexical semantic change to be mapped economically and systematically and has applications in computational social science. We present an illustrative analysis of semantic shifts in \textit{mental health} and \textit{mental illness} in two corpora, demonstrating patterns of semantic change that illuminate contemporary concerns about pathologization, stigma, and concept creep. | [
"Baes, Naomi",
"Haslam, Nick",
"Vylomova, Ekaterina"
] | A Multidimensional Framework for Evaluating Lexical Semantic Change with Social Science Applications | acl-long.76 | Poster | 2406.06052 | [
"https://github.com/naomibaes/lexical_semantic_change_framework"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.76/ | [] | [] | [] | 0 |
|
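The three dimensions in the abstract above translate naturally into collocate statistics. A toy sketch; the inline valence/arousal values stand in for real affective norm lexicons, and type-token ratio is a crude stand-in for the richer breadth measures (e.g., embedding diversity) a full analysis would use:

```python
VALENCE = {"happy": 0.9, "sad": 0.1, "crisis": 0.2, "support": 0.8}
AROUSAL = {"happy": 0.6, "sad": 0.4, "crisis": 0.9, "support": 0.3}

def semantic_change_indices(collocates):
    """Toy indices: mean collocate valence (sentiment), mean collocate
    arousal (intensity), and type-token ratio (breadth)."""
    vals = [VALENCE[w] for w in collocates if w in VALENCE]
    ars = [AROUSAL[w] for w in collocates if w in AROUSAL]
    return {
        "sentiment": sum(vals) / len(vals) if vals else None,
        "intensity": sum(ars) / len(ars) if ars else None,
        "breadth": len(set(collocates)) / len(collocates),
    }

# Collocates of a target word in one time slice of a corpus.
print(semantic_change_indices(["crisis", "support", "crisis", "sad"]))
```

Comparing these indices across time slices is what surfaces the rising or falling sentiment, intensity, and breadth the framework is built to track.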
https://aclanthology.org/2024.acl-long.77.bib | @inproceedings{huang-etal-2024-mitigating,
title = "Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal",
author = "Huang, Jianheng and
Cui, Leyang and
Wang, Ante and
Yang, Chengyi and
Liao, Xinting and
Song, Linfeng and
Yao, Junfeng and
Su, Jinsong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.77",
pages = "1416--1428",
abstract = "Large language models (LLMs) suffer from catastrophic forgetting during continual learning. Conventional rehearsal-based methods rely on previous training data to retain the model{'}s ability, which may not be feasible in real-world applications. When conducting continual learning based on a publicly-released LLM checkpoint, the availability of the original training data may be non-existent. To address this challenge, we propose a framework called Self-Synthesized Rehearsal (SSR) that uses the LLM to generate synthetic instances for rehearsal. Concretely, we first employ the base LLM for in-context learning to generate synthetic instances. Subsequently, we utilize the latest LLM to refine the instance outputs based on the synthetic inputs, preserving its acquired ability. Finally, we select diverse high-quality synthetic instances for rehearsal in future stages. Experimental results demonstrate that SSR achieves superior or comparable performance compared to conventional rehearsal-based approaches while being more data-efficient. Besides, SSR effectively preserves the generalization capabilities of LLMs in general domains.",
}
| Large language models (LLMs) suffer from catastrophic forgetting during continual learning. Conventional rehearsal-based methods rely on previous training data to retain the model{'}s ability, which may not be feasible in real-world applications. When conducting continual learning based on a publicly-released LLM checkpoint, the original training data may simply be unavailable. To address this challenge, we propose a framework called Self-Synthesized Rehearsal (SSR) that uses the LLM to generate synthetic instances for rehearsal. Concretely, we first employ the base LLM for in-context learning to generate synthetic instances. Subsequently, we utilize the latest LLM to refine the instance outputs based on the synthetic inputs, preserving its acquired ability. Finally, we select diverse high-quality synthetic instances for rehearsal in future stages. Experimental results demonstrate that SSR achieves superior or comparable performance compared to conventional rehearsal-based approaches while being more data-efficient. Besides, SSR effectively preserves the generalization capabilities of LLMs in general domains. | [
"Huang, Jianheng",
"Cui, Leyang",
"Wang, Ante",
"Yang, Chengyi",
"Liao, Xinting",
"Song, Linfeng",
"Yao, Junfeng",
"Su, Jinsong"
] | Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal | acl-long.77 | Poster | 2403.01244 | [
"https://github.com/deeplearnxmu/ssr"
] | https://huggingface.co/papers/2403.01244 | 0 | 0 | 0 | 8 | https://aclanthology.org/2024.acl-long.77/ | [] | [] | [] | 1 |
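The three SSR steps in the abstract, synthesize inputs with the base LLM, refine outputs with the latest LLM, then select instances for rehearsal, can be sketched as a short loop. The callables stand in for LLM calls, and the naive dedup + truncation here is a stand-in for the paper's diversity/quality selection:

```python
import itertools

def self_synthesized_rehearsal(base_llm, latest_llm, demos, n=4, keep=2):
    """Sketch of the SSR loop: synthesize, refine, select."""
    synthetic = []
    for _ in range(n):
        x = base_llm(demos)            # step 1: synthesize an input via ICL
        y = latest_llm(x)              # step 2: refine output w/ latest model
        synthetic.append((x, y))
    unique = list(dict.fromkeys(synthetic))   # step 3: keep diverse instances
    return unique[:keep]               # rehearse these in later stages

gen = itertools.cycle(["Q: 2+2?", "Q: capital of France?"])
print(self_synthesized_rehearsal(lambda d: next(gen), lambda x: "A: ...",
                                 demos=["Q: 1+1? A: 2"]))
```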
https://aclanthology.org/2024.acl-long.78.bib | @inproceedings{huang-etal-2024-enhancing,
title = "Enhancing Large Language Models in Coding Through Multi-Perspective Self-Consistency",
author = "Huang, Baizhou and
Lu, Shuai and
Wan, Xiaojun and
Duan, Nan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.78",
pages = "1429--1450",
abstract = "Large language models (LLMs) have exhibited remarkable ability in code generation. However, generating the correct solution in a single attempt still remains a challenge. Prior works utilize verification properties in software engineering to verify and re-rank solutions in a majority voting manner. But the assumption behind them that generated verification properties have better qualities than solutions may not always hold. In this paper, we treat them equally as different perspectives of LLMs{'} reasoning processes. We propose the Multi-Perspective Self-Consistency (MPSC) framework incorporating both inter- and intra-consistency across outputs from multiple perspectives. Specifically, we prompt LLMs to generate diverse outputs from three perspectives, Solution, Specification and Test case, constructing a 3-partite graph. With two measure functions of consistency, we embed both inter- and intra-consistency information into the graph. The optimal choice of solutions is then determined based on analysis in the graph.MPSC significantly boosts performance of foundation models (ChatGPT in this paper) on various benchmarks, including HumanEval (+15.91{\%}), MBPP (+6.43{\%}) and CodeContests (+9.37{\%}), even surpassing GPT-4.",
}
| Large language models (LLMs) have exhibited remarkable ability in code generation. However, generating the correct solution in a single attempt still remains a challenge. Prior works utilize verification properties in software engineering to verify and re-rank solutions in a majority voting manner. But the assumption behind them that generated verification properties have better qualities than solutions may not always hold. In this paper, we treat them equally as different perspectives of LLMs{'} reasoning processes. We propose the Multi-Perspective Self-Consistency (MPSC) framework incorporating both inter- and intra-consistency across outputs from multiple perspectives. Specifically, we prompt LLMs to generate diverse outputs from three perspectives, Solution, Specification and Test case, constructing a 3-partite graph. With two measure functions of consistency, we embed both inter- and intra-consistency information into the graph. The optimal choice of solutions is then determined based on analysis in the graph. MPSC significantly boosts performance of foundation models (ChatGPT in this paper) on various benchmarks, including HumanEval (+15.91{\%}), MBPP (+6.43{\%}) and CodeContests (+9.37{\%}), even surpassing GPT-4. | [
"Huang, Baizhou",
"Lu, Shuai",
"Wan, Xiaojun",
"Duan, Nan"
] | Enhancing Large Language Models in Coding Through Multi-Perspective Self-Consistency | acl-long.78 | Poster | 2309.17272 | [
"https://github.com/skpig/MPSC"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.78/ | [] | [] | [] | 0 |
|
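A degree-style approximation of the graph scoring in the MPSC abstract: a solution is credited for each test it passes, and tests that many solutions pass carry more weight. The paper analyzes a 3-partite graph (solutions, specifications, tests) with refined consistency measures; this two-party sketch is only illustrative:

```python
def mpsc_rank(solutions, tests, passes):
    """Pick the solution with the highest agreement-weighted test score."""
    test_weight = {t: sum(passes(s, t) for s in solutions) for t in tests}
    score = {s: sum(test_weight[t] for t in tests if passes(s, t))
             for s in solutions}
    return max(solutions, key=score.get)

sols = ["sol_a", "sol_b"]
tests = ["t1", "t2", "t3"]
table = {("sol_a", "t1"), ("sol_a", "t2"), ("sol_b", "t2")}
print(mpsc_rank(sols, tests, lambda s, t: (s, t) in table))  # sol_a
```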
https://aclanthology.org/2024.acl-long.79.bib | @inproceedings{li-etal-2024-citation,
title = "Citation-Enhanced Generation for {LLM}-based Chatbots",
author = "Li, Weitao and
Li, Junkai and
Ma, Weizhi and
Liu, Yang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.79",
pages = "1451--1466",
abstract = "Large language models (LLMs) exhibit powerful general intelligence across diverse scenarios, including their integration into chatbots. However, a vital challenge of LLM-based chatbots is that they may produce hallucinated content in responses, which significantly limits their applicability. Various efforts have been made to alleviate hallucination, such as retrieval augmented generation and reinforcement learning with human feedback, but most of them require additional training and data annotation. In this paper, we propose a novel post-hoc Citation-Enhanced Generation (CEG) approach combined with retrieval argumentation. Unlike previous studies that focus on preventing hallucinations during generation, our method addresses this issue in a post-hoc way. It incorporates a retrieval module to search for supporting documents relevant to the generated content, and employs a natural language inference-based citation generation module. Once the statements in the generated content lack of reference, our model can regenerate responses until all statements are supported by citations. Note that our method is a training-free plug-and-play plugin that is capable of various LLMs. Experiments on various hallucination-related datasets show our framework outperforms state-of-the-art methods in both hallucination detection and response regeneration on three benchmarks. Our code and datasets can be found at https://github.com/Tsinghua-dhy/CEG.",
}
| Large language models (LLMs) exhibit powerful general intelligence across diverse scenarios, including their integration into chatbots. However, a vital challenge of LLM-based chatbots is that they may produce hallucinated content in responses, which significantly limits their applicability. Various efforts have been made to alleviate hallucination, such as retrieval augmented generation and reinforcement learning with human feedback, but most of them require additional training and data annotation. In this paper, we propose a novel post-hoc Citation-Enhanced Generation (CEG) approach combined with retrieval augmentation. Unlike previous studies that focus on preventing hallucinations during generation, our method addresses this issue in a post-hoc way. It incorporates a retrieval module to search for supporting documents relevant to the generated content, and employs a natural language inference-based citation generation module. When statements in the generated content lack references, our model regenerates responses until all statements are supported by citations. Note that our method is a training-free plug-and-play plugin that is compatible with various LLMs. Experiments on various hallucination-related datasets show our framework outperforms state-of-the-art methods in both hallucination detection and response regeneration on three benchmarks. Our code and datasets can be found at https://github.com/Tsinghua-dhy/CEG. | [
"Li, Weitao",
"Li, Junkai",
"Ma, Weizhi",
"Liu, Yang"
] | Citation-Enhanced Generation for LLM-based Chatbots | acl-long.79 | Poster | 2402.16063 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.79/ | [] | [] | [] | 0 |
|
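The post-hoc CEG loop described above, retrieve support for each statement, verify it with NLI, and regenerate when anything is unsupported, can be sketched as below. All four callables are stand-ins for the retriever, NLI, splitter, and LLM components, not the released code:

```python
def cite_or_regenerate(answer, split, retrieve, entails, regenerate, max_tries=3):
    """Attach citations post hoc; regenerate until every statement is cited."""
    cited = []
    for _ in range(max_tries):
        cited, unsupported = [], []
        for stmt in split(answer):
            doc = retrieve(stmt)
            if doc is not None and entails(doc, stmt):
                cited.append((stmt, doc))   # NLI says the doc supports it
            else:
                unsupported.append(stmt)
        if not unsupported:
            return cited
        answer = regenerate(answer, unsupported)
    return cited

demo = cite_or_regenerate(
    "Paris is in France. Cats are robots.",
    split=lambda a: [s.strip() for s in a.split(".") if s.strip()],
    retrieve=lambda s: "doc0" if "Paris" in s else None,
    entails=lambda d, s: True,
    regenerate=lambda a, bad: "Paris is in France.",
)
print(demo)  # [('Paris is in France', 'doc0')]
```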
https://aclanthology.org/2024.acl-long.80.bib | @inproceedings{wen-etal-2024-transitive,
title = "Transitive Consistency Constrained Learning for Entity-to-Entity Stance Detection",
author = "Wen, Haoyang and
Hovy, Eduard and
Hauptmann, Alexander",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.80",
pages = "1467--1480",
abstract = "Entity-to-entity stance detection identifies the stance between a pair of entities with a directed link that indicates the source, target and polarity. It is a streamlined task without the complex dependency structure for structural sentiment analysis, while it is more informative compared to most previous work assuming that the source is the author. Previous work performs entity-to-entity stance detection training on individual entity pairs. However, stances between inter-connected entity pairs may be correlated. In this paper, we propose transitive consistency constrained learning, which first finds connected entity pairs and their stances, and adds an additional objective to enforce the transitive consistency. We explore consistency training on both classification-based and generation-based models and conduct experiments to compare consistency training with previous work and large language models with in-context learning. Experimental results illustrate that the inter-correlation of stances in political news can be used to improve the entity-to-entity stance detection model, while overly strict consistency enforcement may have a negative impact. In addition, we find that large language models struggle with predicting link direction and neutral labels in this task.",
}
| Entity-to-entity stance detection identifies the stance between a pair of entities with a directed link that indicates the source, target and polarity. It is a streamlined task without the complex dependency structure for structural sentiment analysis, while it is more informative than most previous work, which assumes that the source is the author. Previous work performs entity-to-entity stance detection training on individual entity pairs. However, stances between inter-connected entity pairs may be correlated. In this paper, we propose transitive consistency constrained learning, which first finds connected entity pairs and their stances, and adds an additional objective to enforce the transitive consistency. We explore consistency training on both classification-based and generation-based models and conduct experiments to compare consistency training with previous work and large language models with in-context learning. Experimental results illustrate that the inter-correlation of stances in political news can be used to improve the entity-to-entity stance detection model, while overly strict consistency enforcement may have a negative impact. In addition, we find that large language models struggle with predicting link direction and neutral labels in this task. | [
"Wen, Haoyang",
"Hovy, Eduard",
"Hauptmann, Alex",
"er"
] | Transitive Consistency Constrained Learning for Entity-to-Entity Stance Detection | acl-long.80 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.80/ | [] | [] | [] | 0 |
||
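The transitivity objective in the abstract above can be pictured as a penalty on stance triangles: with polarities in [-1, 1], the composed expectation for A->C is the product of A->B and B->C, and the penalty is the squared gap. This is an illustrative reading of the constraint, not the paper's exact training objective:

```python
def transitivity_penalty(p_ab, p_bc, p_ac):
    """Toy consistency term over a stance triangle (negative = against,
    positive = favor)."""
    composed = p_ab * p_bc
    return (p_ac - composed) ** 2

# Consistent triangle: ally-of-ally predicted as an ally.
print(transitivity_penalty(0.9, 0.8, 0.7))   # 0.0004, small penalty
# Inconsistent: ally-of-ally predicted as an enemy.
print(transitivity_penalty(0.9, 0.8, -0.6))  # 1.7424, large penalty
```

Summing this term over connected triples and adding it to the classification loss is one way the "additional objective" could be realized; the abstract's caveat about overly strict enforcement suggests weighting it carefully.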
https://aclanthology.org/2024.acl-long.81.bib | @inproceedings{li-etal-2024-feature-adaptive,
title = "Feature-Adaptive and Data-Scalable In-Context Learning",
author = "Li, Jiahao and
Wang, Quan and
Zhang, Licheng and
Jin, Guoqing and
Mao, Zhendong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.81",
pages = "1481--1494",
abstract = "In-context learning (ICL), which promotes inference with several demonstrations, has become a widespread paradigm to stimulate LLM capabilities for downstream tasks. Due to context length constraints, it cannot be further improved in spite of more training data, and general features directly from LLMs in ICL are not adaptive to the specific downstream task. In this paper, we propose a feature-adaptive and data-scalable in-context learning framework (FADS-ICL), which can leverage task-adaptive features to promote inference on the downstream task, with the supervision of beyond-context samples.Specifically, it first extracts general features of beyond-context samples via the LLM with ICL input form one by one, and introduces a task-specific modulator to perform feature refinement and prediction after fitting a specific downstream task. We conduct extensive experiments on FADS-ICL under varying data settings (4{\textasciitilde}128 shots) and LLM scale (0.8{\textasciitilde}70B) settings. Experimental results show that FADS-ICL consistently outperforms previous state-of-the-art methods by a significant margin under all settings, verifying the effectiveness and superiority of FADS-ICL. For example, under the 1.5B and 32 shots setting, FADS-ICL can achieve \textbf{+14.3} average accuracy from feature adaptation over vanilla ICL on 10 datasets, with \textbf{+6.2} average accuracy over the previous state-of-the-art method, and the performance can further improve with increasing training data.",
}
| In-context learning (ICL), which promotes inference with several demonstrations, has become a widespread paradigm to stimulate LLM capabilities for downstream tasks. Due to context length constraints, it cannot be further improved in spite of more training data, and general features directly from LLMs in ICL are not adaptive to the specific downstream task. In this paper, we propose a feature-adaptive and data-scalable in-context learning framework (FADS-ICL), which can leverage task-adaptive features to promote inference on the downstream task, with the supervision of beyond-context samples. Specifically, it first extracts general features of beyond-context samples via the LLM in the ICL input form, one by one, and introduces a task-specific modulator to perform feature refinement and prediction after fitting a specific downstream task. We conduct extensive experiments on FADS-ICL under varying data settings (4{\textasciitilde}128 shots) and LLM scale (0.8{\textasciitilde}70B) settings. Experimental results show that FADS-ICL consistently outperforms previous state-of-the-art methods by a significant margin under all settings, verifying the effectiveness and superiority of FADS-ICL. For example, under the 1.5B and 32 shots setting, FADS-ICL can achieve \textbf{+14.3} average accuracy from feature adaptation over vanilla ICL on 10 datasets, with \textbf{+6.2} average accuracy over the previous state-of-the-art method, and the performance can further improve with increasing training data. | [
"Li, Jiahao",
"Wang, Quan",
"Zhang, Licheng",
"Jin, Guoqing",
"Mao, Zhendong"
] | Feature-Adaptive and Data-Scalable In-Context Learning | acl-long.81 | Poster | 2405.10738 | [
"https://github.com/jiahaozhenbang/fads-icl"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.81/ | [] | [] | [] | 0 |
|
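The FADS-ICL recipe in the abstract, run each beyond-context sample through the frozen LLM to get a feature vector, then fit a light task-specific modulator, can be sketched as below. The logistic regression and the character-count features are stand-ins (the paper's modulator and the LLM hidden states would replace them):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fads_icl(extract_features, train_texts, train_labels, test_texts):
    """Fit a small task-specific head on frozen-LLM features."""
    X_train = np.stack([extract_features(t) for t in train_texts])
    X_test = np.stack([extract_features(t) for t in test_texts])
    clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return clf.predict(X_test)

# Toy stand-in for LLM hidden states: keyword-count "features".
fake_llm_feats = lambda t: np.array([t.count("good"), t.count("bad"), len(t)])
preds = fads_icl(fake_llm_feats,
                 ["good movie", "bad plot", "good fun", "bad acting"],
                 [1, 0, 1, 0],
                 ["really good", "so bad"])
print(preds)  # expected [1 0]
```

Because only the small head is trained, the approach scales with labeled data without ever exceeding the LLM's context window, which is the framework's central point.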
https://aclanthology.org/2024.acl-long.82.bib | @inproceedings{zhang-etal-2024-probing,
title = "Probing the Multi-turn Planning Capabilities of {LLM}s via 20 Question Games",
author = "Zhang, Yizhe and
Lu, Jiarui and
Jaitly, Navdeep",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.82",
pages = "1495--1516",
abstract = "Large language models (LLMs) are effective at answering questions that are clearly asked. However, when faced with ambiguous queries they can act unpredictably and produce incorrect outputs. This underscores the need for the development of intelligent agents capable of asking clarification questions to resolve ambiguities effectively. This capability requires complex understanding, state tracking, reasoning and planning over multiple conversational turns. However, directly measuring this can be challenging.In this paper, we offer a surrogate problem which assesses an LLMs{'}s capability to deduce an entity unknown to itself, but revealed to a judge, by asking the judge a series of queries. This entity-deducing game can serve as an evaluation framework to probe the conversational reasoning and planning capabilities of language models.We systematically evaluate various LLMs and discover significant differences in their performance on this task. We find that strong LLMs like GPT-4 outperform human players by a large margin. We further employ Behavior Cloning (BC) to examine whether a weaker model is capable of imitating a stronger model and generalizing to data or domains, using only the demonstrations from a stronger model. We finally propose to use Reinforcement Learning to enhance reasoning and planning capacity of Vicuna models through episodes of game playing, which lead to significant performance improvement. We hope that this problem offers insights into how autonomous agents could be trained to behave more intelligently in ambiguous circumstances.",
}
| Large language models (LLMs) are effective at answering questions that are clearly asked. However, when faced with ambiguous queries they can act unpredictably and produce incorrect outputs. This underscores the need for the development of intelligent agents capable of asking clarification questions to resolve ambiguities effectively. This capability requires complex understanding, state tracking, reasoning and planning over multiple conversational turns. However, directly measuring this can be challenging. In this paper, we offer a surrogate problem which assesses an LLM{'}s capability to deduce an entity unknown to itself, but revealed to a judge, by asking the judge a series of queries. This entity-deducing game can serve as an evaluation framework to probe the conversational reasoning and planning capabilities of language models. We systematically evaluate various LLMs and discover significant differences in their performance on this task. We find that strong LLMs like GPT-4 outperform human players by a large margin. We further employ Behavior Cloning (BC) to examine whether a weaker model is capable of imitating a stronger model and generalizing to data or domains, using only the demonstrations from a stronger model. We finally propose to use Reinforcement Learning to enhance the reasoning and planning capacity of Vicuna models through episodes of game playing, which leads to significant performance improvement. We hope that this problem offers insights into how autonomous agents could be trained to behave more intelligently in ambiguous circumstances. | [
"Zhang, Yizhe",
"Lu, Jiarui",
"Jaitly, Navdeep"
] | Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games | acl-long.82 | Poster | 2310.01468 | [
"https://github.com/apple/ml-entity-deduction-arena"
] | https://huggingface.co/papers/2310.01468 | 0 | 0 | 0 | 3 | https://aclanthology.org/2024.acl-long.82/ | [] | [
"yizheapple/entity-deduction-arena"
] | [] | 1 |
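The entity-deduction setup above reduces to a simple loop between a guesser model and a judge that knows the hidden entity. The skeleton below sketches that loop; `chat` is a hypothetical stand-in for any chat-LLM call, and the role prompts are illustrative rather than the paper's exact templates (the released arena is at the GitHub link in the row).

```python
# Skeleton of the entity-deduction ("20 questions") evaluation loop.
# `chat(system_prompt, history) -> reply` is a hypothetical LLM interface.
from typing import Callable

def play(chat: Callable[[str, list[str]], str], entity: str, max_turns: int = 20) -> bool:
    history: list[str] = []
    for _ in range(max_turns):
        question = chat(
            "You must identify an unknown entity. Ask one yes/no question, "
            "or guess with 'Is it X?'.", history)
        answer = chat(
            f"You are the judge. The entity is '{entity}'. Reply only "
            "'Yes', 'No', or 'Correct!' to the question.",
            history + [question])
        history += [question, answer]
        if answer.strip().startswith("Correct"):
            return True  # guesser deduced the entity within the turn budget
    return False
```

Scoring how often (and in how few turns) `play` returns True is the kind of multi-turn planning signal the benchmark measures.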
https://aclanthology.org/2024.acl-long.83.bib | @inproceedings{tu-etal-2024-waterbench,
title = "{W}ater{B}ench: Towards Holistic Evaluation of Watermarks for Large Language Models",
author = "Tu, Shangqing and
Sun, Yuliang and
Bai, Yushi and
Yu, Jifan and
Hou, Lei and
Li, Juanzi",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.83",
pages = "1517--1542",
abstract = "To mitigate the potential misuse of large language models (LLMs), recent research has developed watermarking algorithms, which restrict the generation process to leave an invisible trace for watermark detection. Due to the two-stage nature of the task, most studies evaluate the generation and detection separately, thereby presenting a challenge in unbiased, thorough, and applicable evaluations. In this paper, we introduce WaterBench, the first comprehensive benchmark for LLM watermarks, in which we design three crucial factors: (1) For benchmarking procedure, to ensure an apples-to-apples comparison, we first adjust each watermarking method{'}s hyper-parameter to reach the same watermarking strength, then jointly evaluate their generation and detection performance. (2) For task selection, we diversify the input and output length to form a five-category taxonomy, covering 9 tasks. (3) For evaluation metric, we adopt the GPT4-Judge for automatically evaluating the decline of instruction-following abilities after watermarking. We evaluate 4 open-source watermarks on 2 LLMs under 2 watermarking strengths and observe the common struggles for current methods on maintaining the generation quality. The code and data are available at https://github.com/THU-KEG/WaterBench.",
}
| To mitigate the potential misuse of large language models (LLMs), recent research has developed watermarking algorithms, which restrict the generation process to leave an invisible trace for watermark detection. Due to the two-stage nature of the task, most studies evaluate the generation and detection separately, thereby presenting a challenge in unbiased, thorough, and applicable evaluations. In this paper, we introduce WaterBench, the first comprehensive benchmark for LLM watermarks, in which we design three crucial factors: (1) For benchmarking procedure, to ensure an apples-to-apples comparison, we first adjust each watermarking method{'}s hyper-parameter to reach the same watermarking strength, then jointly evaluate their generation and detection performance. (2) For task selection, we diversify the input and output length to form a five-category taxonomy, covering 9 tasks. (3) For evaluation metric, we adopt the GPT4-Judge for automatically evaluating the decline of instruction-following abilities after watermarking. We evaluate 4 open-source watermarks on 2 LLMs under 2 watermarking strengths and observe the common struggles for current methods on maintaining the generation quality. The code and data are available at https://github.com/THU-KEG/WaterBench. | [
"Tu, Shangqing",
"Sun, Yuliang",
"Bai, Yushi",
"Yu, Jifan",
"Hou, Lei",
"Li, Juanzi"
] | WaterBench: Towards Holistic Evaluation of Watermarks for Large Language Models | acl-long.83 | Poster | 2311.07138 | [
"https://github.com/THU-KEG/WaterBench"
] | https://huggingface.co/papers/2311.07138 | 0 | 2 | 0 | 6 | https://aclanthology.org/2024.acl-long.83/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.84.bib | @inproceedings{zhao-etal-2024-dependency,
title = "Dependency Transformer Grammars: Integrating Dependency Structures into Transformer Language Models",
author = "Zhao, Yida and
Lou, Chao and
Tu, Kewei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.84",
pages = "1543--1556",
abstract = "Syntactic Transformer language models aim to achieve better generalization through simultaneously modeling syntax trees and sentences. While prior work has been focusing on adding constituency-based structures to Transformers, we introduce Dependency Transformer Grammars (DTGs), a new class of Transformer language model with explicit dependency-based inductive bias. DTGs simulate dependency transition systems with constrained attention patterns by modifying attention masks, incorporate the stack information through relative positional encoding, and augment dependency arc representation with a combination of token embeddings and operation embeddings. When trained on a dataset of sentences annotated with dependency trees, DTGs achieve better generalization while maintaining comparable perplexity with Transformer language model baselines. DTGs also outperform recent constituency-based models, showing that dependency can better guide Transformer language models. Our code is released at https://github.com/zhaoyd1/Dep{\_}Transformer{\_}Grammars.",
}
| Syntactic Transformer language models aim to achieve better generalization through simultaneously modeling syntax trees and sentences. While prior work has focused on adding constituency-based structures to Transformers, we introduce Dependency Transformer Grammars (DTGs), a new class of Transformer language model with explicit dependency-based inductive bias. DTGs simulate dependency transition systems with constrained attention patterns by modifying attention masks, incorporate the stack information through relative positional encoding, and augment dependency arc representation with a combination of token embeddings and operation embeddings. When trained on a dataset of sentences annotated with dependency trees, DTGs achieve better generalization while maintaining comparable perplexity with Transformer language model baselines. DTGs also outperform recent constituency-based models, showing that dependency can better guide Transformer language models. Our code is released at https://github.com/zhaoyd1/Dep{\_}Transformer{\_}Grammars. | [
"Zhao, Yida",
"Lou, Chao",
"Tu, Kewei"
] | Dependency Transformer Grammars: Integrating Dependency Structures into Transformer Language Models | acl-long.84 | Poster | 2407.17406 | [
"https://github.com/zhaoyd1/dep_transformer_grammars"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.84/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.85.bib | @inproceedings{ma-etal-2024-non,
title = "A Non-autoregressive Generation Framework for End-to-End Simultaneous Speech-to-Any Translation",
author = "Ma, Zhengrui and
Fang, Qingkai and
Zhang, Shaolei and
Guo, Shoutao and
Feng, Yang and
Zhang, Min",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.85",
pages = "1557--1575",
abstract = "Simultaneous translation models play a crucial role in facilitating communication. However, existing research primarily focuses on text-to-text or speech-to-text models, necessitating additional cascade components to achieve speech-to-speech translation. These pipeline methods suffer from error propagation and accumulate delays in each cascade component, resulting in reduced synchronization between the speaker and listener. To overcome these challenges, we propose a novel non-autoregressive generation framework for simultaneous speech translation (NAST-S2$x$), which integrates speech-to-text and speech-to-speech tasks into a unified end-to-end framework.We develop a non-autoregressive decoder capable of concurrently generating multiple text or acoustic unit tokens upon receiving fixed-length speech chunks. The decoder can generate blank or repeated tokens and employ CTC decoding to dynamically adjust its latency. Experimental results show that NAST-S2$x$ outperforms state-of-the-art models in both speech-to-text and speech-to-speech tasks. It achieves high-quality simultaneous interpretation within a delay of less than 3 seconds and provides a 28{\mbox{$\times$}} decoding speedup in offline generation.",
}
| Simultaneous translation models play a crucial role in facilitating communication. However, existing research primarily focuses on text-to-text or speech-to-text models, necessitating additional cascade components to achieve speech-to-speech translation. These pipeline methods suffer from error propagation and accumulate delays in each cascade component, resulting in reduced synchronization between the speaker and listener. To overcome these challenges, we propose a novel non-autoregressive generation framework for simultaneous speech translation (NAST-S2$x$), which integrates speech-to-text and speech-to-speech tasks into a unified end-to-end framework. We develop a non-autoregressive decoder capable of concurrently generating multiple text or acoustic unit tokens upon receiving fixed-length speech chunks. The decoder can generate blank or repeated tokens and employ CTC decoding to dynamically adjust its latency. Experimental results show that NAST-S2$x$ outperforms state-of-the-art models in both speech-to-text and speech-to-speech tasks. It achieves high-quality simultaneous interpretation within a delay of less than 3 seconds and provides a 28{\mbox{$\times$}} decoding speedup in offline generation. | [
"Ma, Zhengrui",
"Fang, Qingkai",
"Zhang, Shaolei",
"Guo, Shoutao",
"Feng, Yang",
"Zhang, Min"
] | A Non-autoregressive Generation Framework for End-to-End Simultaneous Speech-to-Any Translation | acl-long.85 | Poster | [
"https://github.com/ictnlp/nast-s2x"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.85/ | [] | [] | [] | 0 |
||
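The NAST-S2x abstract mentions that the decoder may emit blank or repeated tokens and relies on CTC decoding; the core post-processing step is the standard CTC greedy collapse rule, sketched below. This is the generic rule, not the authors' implementation (see the ictnlp/nast-s2x repo in the row).

```python
# Generic CTC greedy collapse: merge consecutive repeats, then drop blanks.
BLANK = 0  # assumed blank token id

def ctc_collapse(token_ids: list[int]) -> list[int]:
    out: list[int] = []
    prev = None
    for t in token_ids:
        if t != prev and t != BLANK:  # a new non-blank symbol survives
            out.append(t)
        prev = t
    return out

# A blank between repeats keeps both copies: [7, 7, 3]
assert ctc_collapse([0, 7, 7, 0, 7, 3, 3, 0]) == [7, 7, 3]
```

Emitting extra blanks is what lets such a decoder trade output length against latency without changing the collapsed translation.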
https://aclanthology.org/2024.acl-long.86.bib | @inproceedings{liu-etal-2024-probing,
title = "Probing Language Models for Pre-training Data Detection",
author = "Liu, Zhenhua and
Zhu, Tong and
Tan, Chuanyuan and
Liu, Bing and
Lu, Haonan and
Chen, Wenliang",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.86",
pages = "1576--1587",
abstract = "Large Language Models (LLMs) have shown their impressive capabilities, while also raising concerns about the data contamination problems due to privacy issues and leakage of benchmark datasets in the pre-training phase. Therefore, it is vital to detect the contamination by checking whether an LLM has been pre-trained on the target texts. Recent studies focus on the generated texts and compute perplexities, which are superficial features and not reliable. In this study, we propose to utilize the probing technique for pre-training data detection by examining the model{'}s internal activations. Our method is simple and effective and leads to more trustworthy pre-training data detection. Additionally, we propose ArxivMIA, a new challenging benchmark comprising arxiv abstracts from Computer Science and Mathematics categories. Our experiments demonstrate that our method outperforms all baselines, and achieves state-of-the-art performance on both WikiMIA and ArxivMIA, with additional experiments confirming its efficacy.",
}
| Large Language Models (LLMs) have shown their impressive capabilities, while also raising concerns about data contamination problems due to privacy issues and leakage of benchmark datasets in the pre-training phase. Therefore, it is vital to detect the contamination by checking whether an LLM has been pre-trained on the target texts. Recent studies focus on the generated texts and compute perplexities, which are superficial features and not reliable. In this study, we propose to utilize the probing technique for pre-training data detection by examining the model{'}s internal activations. Our method is simple and effective and leads to more trustworthy pre-training data detection. Additionally, we propose ArxivMIA, a new, challenging benchmark comprising arXiv abstracts from Computer Science and Mathematics categories. Our experiments demonstrate that our method outperforms all baselines, and achieves state-of-the-art performance on both WikiMIA and ArxivMIA, with additional experiments confirming its efficacy. | [
"Liu, Zhenhua",
"Zhu, Tong",
"Tan, Chuanyuan",
"Liu, Bing",
"Lu, Haonan",
"Chen, Wenliang"
] | Probing Language Models for Pre-training Data Detection | acl-long.86 | Poster | 2406.01333 | [
"https://github.com/zhliu0106/probing-lm-data"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.86/ | [] | [] | [] | 0 |
|
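A minimal version of the probing idea above: collect an internal activation per text and train a binary probe on texts with known member/non-member status. The backbone (GPT-2), the layer choice, the mean pooling, and the logistic-regression probe are assumptions for illustration; the authors' code is at the GitHub link in the row.

```python
# Minimal membership probe over internal activations (assumed: GPT-2,
# mean-pooled layer-6 activations, logistic-regression probe).
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

def act(text: str, layer: int = 6) -> torch.Tensor:
    ids = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hs = lm(**ids).hidden_states[layer]  # (1, seq, dim)
    return hs.mean(dim=1)[0]  # mean-pool over tokens

members = ["a text believed to be in the pre-training data ..."]
non_members = ["a text published after the model's training cutoff ..."]
X = torch.stack([act(t) for t in members + non_members]).numpy()
y = [1] * len(members) + [0] * len(non_members)
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict_proba(act("a suspect document").numpy()[None])[0, 1])
```

Unlike perplexity thresholds computed on outputs, the probe reads the model's internal state, which is the abstract's main argument for its reliability.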
https://aclanthology.org/2024.acl-long.87.bib | @inproceedings{zhang-etal-2024-analyzing,
title = "Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding",
author = "Zhang, Zhihan and
Cao, Yixin and
Ye, Chenchen and
Ma, Yunshan and
Liao, Lizi and
Chua, Tat-Seng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.87",
pages = "1588--1606",
abstract = "The digital landscape is rapidly evolving with an ever-increasing volume of online news, emphasizing the need for swift and precise analysis of complex events.We refer to the complex events composed of many news articles over an extended period as Temporal Complex Event (TCE). This paper proposes a novel approach using Large Language Models (LLMs) to systematically extract and analyze the event chain within TCE, characterized by their key points and timestamps. We establish a benchmark, named TCELongBench, to evaluate the proficiency of LLMs in handling temporal dynamics and understanding extensive text. This benchmark encompasses three distinct tasks - reading comprehension, temporal sequencing, and future event forecasting. In the experiment, we leverage retrieval-augmented generation (RAG) method and LLMs with long context window to deal with lengthy news articles of TCE. Our findings indicate that models with suitable retrievers exhibit comparable performance with those utilizing long context window.",
}
| The digital landscape is rapidly evolving with an ever-increasing volume of online news, emphasizing the need for swift and precise analysis of complex events. We refer to the complex events composed of many news articles over an extended period as Temporal Complex Event (TCE). This paper proposes a novel approach using Large Language Models (LLMs) to systematically extract and analyze the event chain within TCE, characterized by their key points and timestamps. We establish a benchmark, named TCELongBench, to evaluate the proficiency of LLMs in handling temporal dynamics and understanding extensive text. This benchmark encompasses three distinct tasks - reading comprehension, temporal sequencing, and future event forecasting. In the experiment, we leverage the retrieval-augmented generation (RAG) method and LLMs with long context windows to deal with lengthy news articles of TCE. Our findings indicate that models with suitable retrievers exhibit comparable performance with those utilizing a long context window. | [
"Zhang, Zhihan",
"Cao, Yixin",
"Ye, Chenchen",
"Ma, Yunshan",
"Liao, Lizi",
"Chua, Tat-Seng"
] | Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding | acl-long.87 | Poster | 2406.02472 | [
"https://github.com/Zhihan72/TCELongBench"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.87/ | [] | [] | [] | 0 |
|
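The retrieve-then-read setting the TCELongBench experiments compare against long-context models can be sketched in a few lines; TF-IDF here is a stand-in retriever and the toy articles are invented for illustration (the paper evaluates stronger retrievers on real news).

```python
# Minimal retrieve-then-read setup over a temporal complex event's articles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Day 1: protests erupt in the capital.",
    "Day 9: parliament passes the emergency bill.",
    "Day 23: the court suspends the bill.",
]
question = "What happened to the emergency bill?"

vec = TfidfVectorizer().fit(articles + [question])
scores = cosine_similarity(vec.transform([question]), vec.transform(articles))[0]
top = sorted(range(len(articles)), key=lambda i: -scores[i])[:2]
context = "\n".join(articles[i] for i in top)  # goes into the LLM prompt
print(context)
```

The benchmark's finding is that this cheap pipeline, with a good enough retriever, keeps pace with feeding the whole article stream into a long-context model.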
https://aclanthology.org/2024.acl-long.88.bib | @inproceedings{han-etal-2024-ibsen,
title = "{IBSEN}: Director-Actor Agent Collaboration for Controllable and Interactive Drama Script Generation",
author = "Han, Senyu and
Chen, Lu and
Lin, Li-Min and
Xu, Zhengshan and
Yu, Kai",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.88",
pages = "1607--1619",
abstract = "Large language models have demonstrated their capabilities in storyline creation and human-like character role-playing. Current language model agents mainly focus on reasonable behaviors from the level of individuals, and their behaviors might be hard to constraint on the level of the whole storyline. In this paper we introduce IBSEN, a director-actor coordinate agent framework that generates drama scripts and makes the plot played by agents more controllable. The director agent writes plot outlines that the user desires to see, instructs the actor agents to role-play their characters, and reschedules the plot when human players participate in the scenario to ensure the plot is progressing towards the objective. To evaluate the framework, we create a novel drama plot that involves several actor agents and check the interactions between them under the instruction of the director agent. Evaluation results show that our framework could generate complete, diverse drama scripts from only a rough outline of plot objectives, meanwhile maintaining the characteristics of characters in the drama. Our codes and prompts are available at https://github.com/OpenDFM/ibsen.",
}
| Large language models have demonstrated their capabilities in storyline creation and human-like character role-playing. Current language model agents mainly focus on reasonable behaviors at the level of individuals, and their behaviors might be hard to constrain at the level of the whole storyline. In this paper we introduce IBSEN, a director-actor coordination agent framework that generates drama scripts and makes the plot played by agents more controllable. The director agent writes plot outlines that the user desires to see, instructs the actor agents to role-play their characters, and reschedules the plot when human players participate in the scenario to ensure the plot is progressing towards the objective. To evaluate the framework, we create a novel drama plot that involves several actor agents and check the interactions between them under the instruction of the director agent. Evaluation results show that our framework could generate complete, diverse drama scripts from only a rough outline of plot objectives, while maintaining the characteristics of characters in the drama. Our codes and prompts are available at https://github.com/OpenDFM/ibsen. | [
"Han, Senyu",
"Chen, Lu",
"Lin, Li-Min",
"Xu, Zhengshan",
"Yu, Kai"
] | IBSEN: Director-Actor Agent Collaboration for Controllable and Interactive Drama Script Generation | acl-long.88 | Poster | 2407.01093 | [
"https://github.com/OpenDFM/ibsen"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.88/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.89.bib | @inproceedings{wang-etal-2024-language-model,
title = "Language Model Adaption for Reinforcement Learning with Natural Language Action Space",
author = "Wang, Jiangxing and
Li, Jiachen and
Han, Xiao and
Ye, Deheng and
Lu, Zongqing",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.89",
pages = "1620--1634",
abstract = "Reinforcement learning with natural language action space often suffers from the curse of dimensionality due to the combinatorial nature of the natural language. Previous research leverages pretrained language models to capture action semantics and reduce the size of the action space. However, since pretrained models are typically trained on general corpora, there can be an unpredictable mismatch between the priors encoded in pretrained models and the characteristics of the specific RL environment. To address this issue, we propose Mutual-Information Regularized Policy Optimization, MIPO. MIPO enables implicit and dynamic reduction of the action space. Starting from the prior provided by the pretrained language model, our method dynamically adjusts the prior during the learning process based on the guidance of mutual information regularization. Theoretically, we demonstrate that this policy optimization process leads to the monotonic improvement on the mutual-information regularized RL objective. Empirically, we conduct experiments in various environments and demonstrate the effectiveness of MIPO.",
}
| Reinforcement learning with a natural language action space often suffers from the curse of dimensionality due to the combinatorial nature of natural language. Previous research leverages pretrained language models to capture action semantics and reduce the size of the action space. However, since pretrained models are typically trained on general corpora, there can be an unpredictable mismatch between the priors encoded in pretrained models and the characteristics of the specific RL environment. To address this issue, we propose Mutual-Information Regularized Policy Optimization, MIPO. MIPO enables implicit and dynamic reduction of the action space. Starting from the prior provided by the pretrained language model, our method dynamically adjusts the prior during the learning process based on the guidance of mutual information regularization. Theoretically, we demonstrate that this policy optimization process leads to monotonic improvement of the mutual-information regularized RL objective. Empirically, we conduct experiments in various environments and demonstrate the effectiveness of MIPO. | [
"Wang, Jiangxing",
"Li, Jiachen",
"Han, Xiao",
"Ye, Deheng",
"Lu, Zongqing"
] | Language Model Adaption for Reinforcement Learning with Natural Language Action Space | acl-long.89 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.89/ | [] | [] | [] | 0 |
||
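For orientation, a mutual-information regularized RL objective of the general kind the MIPO abstract references can be written as below. This is the standard textbook form, with $\bar{\pi}$ the marginal action distribution acting as the learnable prior; MIPO's exact formulation may differ (see the paper).

$$
\max_{\pi}\ \mathbb{E}_{\pi}\Big[\sum_{t} r(s_t, a_t)\Big] \;-\; \beta\, I(S; A),
\qquad
I(S; A) \;=\; \mathbb{E}_{s}\Big[ D_{\mathrm{KL}}\big(\pi(\cdot \mid s)\,\big\|\,\bar{\pi}(\cdot)\big) \Big],
$$

where $\beta > 0$ trades off return against how far the state-conditioned policy may deviate from the prior; initializing the prior from the pretrained language model and re-fitting it during learning matches the abstract's description of dynamically adjusting the prior.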
https://aclanthology.org/2024.acl-long.90.bib | @inproceedings{sakurai-miyao-2024-evaluating,
title = "Evaluating Intention Detection Capability of Large Language Models in Persuasive Dialogues",
author = "Sakurai, Hiromasa and
Miyao, Yusuke",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.90",
pages = "1635--1657",
abstract = "We investigate intention detection in persuasive multi-turn dialogs employing the largest available Large Language Models (LLMs).Much of the prior research measures the intention detection capability of machine learning models without considering the conversational history.To evaluate LLMs{'} intention detection capability in conversation, we modified the existing datasets of persuasive conversation and created datasets using a multiple-choice paradigm.It is crucial to consider others{'} perspectives through their utterances when engaging in a persuasive conversation, especially when making a request or reply that is inconvenient for others.This feature makes the persuasive dialogue suitable for the dataset of measuring intention detection capability.We incorporate the concept of {`}face acts,{'} which categorize how utterances affect mental states.This approach enables us to measure intention detection capability by focusing on crucial intentions and to conduct comprehensible analysis according to intention types.",
}
| We investigate intention detection in persuasive multi-turn dialogs employing the largest available Large Language Models (LLMs). Much of the prior research measures the intention detection capability of machine learning models without considering the conversational history. To evaluate LLMs{'} intention detection capability in conversation, we modified the existing datasets of persuasive conversation and created datasets using a multiple-choice paradigm. It is crucial to consider others{'} perspectives through their utterances when engaging in a persuasive conversation, especially when making a request or reply that is inconvenient for others. This feature makes persuasive dialogue suitable as a dataset for measuring intention detection capability. We incorporate the concept of {`}face acts,{'} which categorize how utterances affect mental states. This approach enables us to measure intention detection capability by focusing on crucial intentions and to conduct comprehensible analysis according to intention types. | [
"Sakurai, Hiromasa",
"Miyao, Yusuke"
] | Evaluating Intention Detection Capability of Large Language Models in Persuasive Dialogues | acl-long.90 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.90/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.91.bib | @inproceedings{jiang-etal-2024-longllmlingua,
title = "{L}ong{LLML}ingua: Accelerating and Enhancing {LLM}s in Long Context Scenarios via Prompt Compression",
author = "Jiang, Huiqiang and
Wu, Qianhui and
Luo, Xufang and
Li, Dongsheng and
Lin, Chin-Yew and
Yang, Yuqing and
Qiu, Lili",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.91",
pages = "1658--1677",
abstract = "In long context scenarios, large language models (LLMs) face three main challenges: higher computational cost, performance reduction, and position bias. Research indicates that LLM performance hinges on the density and position of key information in the input prompt. Inspired by these findings, we propose LongLLMLingua for prompt compression towards improving LLMs{'} perception of the key information to simultaneously address the three challenges. Our extensive evaluation across various long context scenarios demonstrates that LongLLMLingua not only enhances performance but also significantly reduces costs and latency. For instance, in the NaturalQuestions benchmark, LongLLMLingua boosts performance by up to 21.4{\%} with around 4x fewer tokens in GPT-3.5-Turbo, leading to substantial cost savings. It achieves a 94.0{\%} cost reduction in the LooGLE benchmark. Moreover, when compressing prompts of about 10k tokens at ratios of 2x-6x, LongLLMLingua can accelerate end-to-end latency by 1.4x-2.6x.",
}
| In long context scenarios, large language models (LLMs) face three main challenges: higher computational cost, performance reduction, and position bias. Research indicates that LLM performance hinges on the density and position of key information in the input prompt. Inspired by these findings, we propose LongLLMLingua for prompt compression towards improving LLMs{'} perception of the key information to simultaneously address the three challenges. Our extensive evaluation across various long context scenarios demonstrates that LongLLMLingua not only enhances performance but also significantly reduces costs and latency. For instance, in the NaturalQuestions benchmark, LongLLMLingua boosts performance by up to 21.4{\%} with around 4x fewer tokens in GPT-3.5-Turbo, leading to substantial cost savings. It achieves a 94.0{\%} cost reduction in the LooGLE benchmark. Moreover, when compressing prompts of about 10k tokens at ratios of 2x-6x, LongLLMLingua can accelerate end-to-end latency by 1.4x-2.6x. | [
"Jiang, Huiqiang",
"Wu, Qianhui",
"Luo, Xufang",
"Li, Dongsheng",
"Lin, Chin-Yew",
"Yang, Yuqing",
"Qiu, Lili"
] | LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression | acl-long.91 | Poster | 2310.06839 | [
"https://github.com/microsoft/LLMLingua"
] | https://huggingface.co/papers/2310.06839 | 2 | 3 | 0 | 7 | https://aclanthology.org/2024.acl-long.91/ | [] | [] | [
"microsoft/LLMLingua",
"microsoft/llmlingua-2",
"Vincentt/LLMLingua",
"themanas021/llmlingua-2",
"Arafath10/llmlingua-2",
"dryouviavant/llmlingua-2",
"loveitl/Promot-Compress",
"Almaatla/llmlingua-2"
] | 1 |
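The core idea behind LLMLingua-style prompt compression, as the row above describes, is to score prompt tokens with a small language model and keep only the most informative ones under a budget. The sketch below implements that core idea from scratch with GPT-2; the real LongLLMLingua adds question-aware ranking, document reordering, and adaptive ratios (see the microsoft/LLMLingua repo in the row).

```python
# Generic perplexity-based prompt compression sketch (not LongLLMLingua's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def compress(prompt: str, keep_ratio: float = 0.5) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    # surprisal of each token given its prefix, under the small LM
    nll = torch.nn.functional.cross_entropy(
        logits[0, :-1], ids[0, 1:], reduction="none")
    k = max(1, int(nll.numel() * keep_ratio))
    keep = torch.zeros(ids.numel(), dtype=torch.bool)
    keep[0] = True                        # always keep the first token
    keep[1:][nll.topk(k).indices] = True  # keep the most surprising tokens
    return tok.decode(ids[0][keep])

print(compress("The quick brown fox jumps over the lazy dog near the river bank."))
```

Low-surprisal tokens are the ones the target LLM could have predicted anyway, which is why dropping them preserves most of the prompt's information per token.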
https://aclanthology.org/2024.acl-long.92.bib | @inproceedings{jin-etal-2024-persuading,
title = "Persuading across Diverse Domains: a Dataset and Persuasion Large Language Model",
author = "Jin, Chuhao and
Ren, Kening and
Kong, Lingzhen and
Wang, Xiting and
Song, Ruihua and
Chen, Huan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.92",
pages = "1678--1706",
abstract = "Persuasive dialogue requires multi-turn following and planning abilities to achieve the goal of persuading users, which is still challenging even for state-of-the-art large language models (LLMs). Previous works focus on retrieval-based models or generative models in a specific domain due to a lack of data across multiple domains. In this paper, we leverage GPT-4 to create the first multi-domain persuasive dialogue dataset DailyPersuasion. Then we propose a general method named PersuGPT to learn a persuasion model based on LLMs through intent-to-strategy reasoning, which summarizes the intent of user{'}s utterance and reasons next strategy to respond. Moreover, we design a simulation-based preference optimization, which utilizes a learned user model and our model to simulate next turns and estimate their rewards more accurately. Experimental results on two datasets indicate that our proposed method outperforms all baselines in terms of automatic evaluation metric Win-Rate and human evaluation. The code and data are available at https://persugpt.github.io.",
}
| Persuasive dialogue requires multi-turn following and planning abilities to achieve the goal of persuading users, which is still challenging even for state-of-the-art large language models (LLMs). Previous works focus on retrieval-based models or generative models in a specific domain due to a lack of data across multiple domains. In this paper, we leverage GPT-4 to create the first multi-domain persuasive dialogue dataset DailyPersuasion. Then we propose a general method named PersuGPT to learn a persuasion model based on LLMs through intent-to-strategy reasoning, which summarizes the intent of the user{'}s utterance and reasons about the next strategy for responding. Moreover, we design a simulation-based preference optimization, which utilizes a learned user model and our model to simulate the next turns and estimate their rewards more accurately. Experimental results on two datasets indicate that our proposed method outperforms all baselines in terms of the automatic evaluation metric Win-Rate and human evaluation. The code and data are available at https://persugpt.github.io. | [
"Jin, Chuhao",
"Ren, Kening",
"Kong, Lingzhen",
"Wang, Xiting",
"Song, Ruihua",
"Chen, Huan"
] | Persuading across Diverse Domains: a Dataset and Persuasion Large Language Model | acl-long.92 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.92/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.93.bib | @inproceedings{xiao-etal-2024-healme,
title = "{H}eal{M}e: Harnessing Cognitive Reframing in Large Language Models for Psychotherapy",
author = "Xiao, Mengxi and
Xie, Qianqian and
Kuang, Ziyan and
Liu, Zhicheng and
Yang, Kailai and
Peng, Min and
Han, Weiguang and
Huang, Jimin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.93",
pages = "1707--1725",
abstract = "Large Language Models (LLMs) can play a vital role in psychotherapy by adeptly handling the crucial task of cognitive reframing and overcoming challenges such as shame, distrust, therapist skill variability, and resource scarcity. Previous LLMs in cognitive reframing mainly converted negative emotions to positive ones, but these approaches have limited efficacy, often not promoting clients{'} self-discovery of alternative perspectives. In this paper, we unveil the Helping and Empowering through Adaptive Language in Mental Enhancement (HealMe) model. This novel cognitive reframing therapy method effectively addresses deep-rooted negative thoughts and fosters rational, balanced perspectives. Diverging from traditional LLM methods, HealMe employs empathetic dialogue based on psychotherapeutic frameworks. It systematically guides clients through distinguishing circumstances from feelings, brainstorming alternative viewpoints, and developing empathetic, actionable suggestions. Moreover, we adopt the first comprehensive and expertly crafted psychological evaluation metrics, specifically designed to rigorously assess the performance of cognitive reframing, in both AI-simulated dialogues and real-world therapeutic conversations. Experimental results show that our model outperforms others in terms of empathy, guidance, and logical coherence, demonstrating its effectiveness and potential positive impact on psychotherapy.",
}
| Large Language Models (LLMs) can play a vital role in psychotherapy by adeptly handling the crucial task of cognitive reframing and overcoming challenges such as shame, distrust, therapist skill variability, and resource scarcity. Previous LLMs in cognitive reframing mainly converted negative emotions to positive ones, but these approaches have limited efficacy, often not promoting clients{'} self-discovery of alternative perspectives. In this paper, we unveil the Helping and Empowering through Adaptive Language in Mental Enhancement (HealMe) model. This novel cognitive reframing therapy method effectively addresses deep-rooted negative thoughts and fosters rational, balanced perspectives. Diverging from traditional LLM methods, HealMe employs empathetic dialogue based on psychotherapeutic frameworks. It systematically guides clients through distinguishing circumstances from feelings, brainstorming alternative viewpoints, and developing empathetic, actionable suggestions. Moreover, we adopt the first comprehensive and expertly crafted psychological evaluation metrics, specifically designed to rigorously assess the performance of cognitive reframing, in both AI-simulated dialogues and real-world therapeutic conversations. Experimental results show that our model outperforms others in terms of empathy, guidance, and logical coherence, demonstrating its effectiveness and potential positive impact on psychotherapy. | [
"Xiao, Mengxi",
"Xie, Qianqian",
"Kuang, Ziyan",
"Liu, Zhicheng",
"Yang, Kailai",
"Peng, Min",
"Han, Weiguang",
"Huang, Jimin"
] | HealMe: Harnessing Cognitive Reframing in Large Language Models for Psychotherapy | acl-long.93 | Poster | 2403.05574 | [
"https://github.com/elsa66666/healme"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.93/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.94.bib | @inproceedings{guo-etal-2024-multimodal,
title = "Multimodal Prompt Learning with Missing Modalities for Sentiment Analysis and Emotion Recognition",
author = "Guo, Zirun and
Jin, Tao and
Zhao, Zhou",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.94",
pages = "1726--1736",
abstract = "The development of multimodal models has significantly advanced multimodal sentiment analysis and emotion recognition. However, in real-world applications, the presence of various missing modality cases often leads to a degradation in the model{'}s performance. In this work, we propose a novel multimodal Transformer framework using prompt learning to address the issue of missing modalities. Our method introduces three types of prompts: generative prompts, missing-signal prompts, and missing-type prompts. These prompts enable the generation of missing modality features and facilitate the learning of intra- and inter-modality information. Through prompt learning, we achieve a substantial reduction in the number of trainable parameters. Our proposed method outperforms other methods significantly across all evaluation metrics. Extensive experiments and ablation studies are conducted to demonstrate the effectiveness and robustness of our method, showcasing its ability to effectively handle missing modalities. Codes are available at https://github.com/zrguo/MPLMM.",
}
| The development of multimodal models has significantly advanced multimodal sentiment analysis and emotion recognition. However, in real-world applications, the presence of various missing modality cases often leads to a degradation in the model{'}s performance. In this work, we propose a novel multimodal Transformer framework using prompt learning to address the issue of missing modalities. Our method introduces three types of prompts: generative prompts, missing-signal prompts, and missing-type prompts. These prompts enable the generation of missing modality features and facilitate the learning of intra- and inter-modality information. Through prompt learning, we achieve a substantial reduction in the number of trainable parameters. Our proposed method outperforms other methods significantly across all evaluation metrics. Extensive experiments and ablation studies are conducted to demonstrate the effectiveness and robustness of our method, showcasing its ability to effectively handle missing modalities. Codes are available at https://github.com/zrguo/MPLMM. | [
"Guo, Zirun",
"Jin, Tao",
"Zhao, Zhou"
] | Multimodal Prompt Learning with Missing Modalities for Sentiment Analysis and Emotion Recognition | acl-long.94 | Poster | 2407.05374 | [
"https://github.com/zrguo/MPLMM"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.94/ | [] | [] | [] | 0 |
|
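The missing-modality prompts described above are, at their simplest, learnable tensors selected by the missing pattern and prepended to the fused sequence. The PyTorch sketch below shows one such "missing-type" prompt bank; the dimensions, patterns, and injection point are assumptions for illustration (the paper's full design with generative and missing-signal prompts is at the zrguo/MPLMM repo in the row).

```python
# Minimal sketch of missing-type prompts (assumed dims and injection point).
import torch
import torch.nn as nn

class MissingTypePrompts(nn.Module):
    def __init__(self, d_model: int = 256, prompt_len: int = 8):
        super().__init__()
        # one learnable prompt per missing pattern over (text, audio, vision)
        patterns = ["none", "text", "audio", "vision"]
        self.prompts = nn.ParameterDict(
            {p: nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
             for p in patterns})

    def forward(self, fused: torch.Tensor, missing: str) -> torch.Tensor:
        # fused: (batch, seq, d_model); prepend the pattern's prompt tokens
        prompt = self.prompts[missing].expand(fused.size(0), -1, -1)
        return torch.cat([prompt, fused], dim=1)

x = torch.randn(4, 32, 256)
print(MissingTypePrompts()(x, "audio").shape)  # torch.Size([4, 40, 256])
```

Only the prompt parameters train while the backbone stays frozen, which is where the abstract's "substantial reduction in trainable parameters" comes from.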
https://aclanthology.org/2024.acl-long.95.bib | @inproceedings{yan-etal-2024-effective,
title = "An Effective Pronunciation Assessment Approach Leveraging Hierarchical Transformers and Pre-training Strategies",
author = "Yan, Bi-Cheng and
Li, Jiun-Ting and
Wang, Yi-Cheng and
Wang, Hsin Wei and
Lo, Tien-Hong and
Hsu, Yung-Chang and
Chao, Wei-Cheng and
Chen, Berlin",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.95",
pages = "1737--1747",
abstract = "Automatic pronunciation assessment (APA) manages to quantify a second language (L2) learner{'}s pronunciation proficiency in a target language by providing fine-grained feedback with multiple pronunciation aspect scores at various linguistic levels. Most existing efforts on APA typically parallelize the modeling process, namely predicting multiple aspect scores across various linguistic levels simultaneously. This inevitably makes both the hierarchy of linguistic units and the relatedness among the pronunciation aspects sidelined. Recognizing such a limitation, we in this paper first introduce HierTFR, a hierarchal APA method that jointly models the intrinsic structures of an utterance while considering the relatedness among the pronunciation aspects. We also propose a correlation-aware regularizer to strengthen the connection between the estimated scores and the human annotations. Furthermore, novel pre-training strategies tailored for different linguistic levels are put forward so as to facilitate better model initialization. An extensive set of empirical experiments conducted on the speechocean762 benchmark dataset suggest the feasibility and effectiveness of our approach in relation to several competitive baselines.",
}
| Automatic pronunciation assessment (APA) aims to quantify a second language (L2) learner{'}s pronunciation proficiency in a target language by providing fine-grained feedback with multiple pronunciation aspect scores at various linguistic levels. Most existing efforts on APA typically parallelize the modeling process, namely predicting multiple aspect scores across various linguistic levels simultaneously. This inevitably sidelines both the hierarchy of linguistic units and the relatedness among the pronunciation aspects. Recognizing such a limitation, in this paper we first introduce HierTFR, a hierarchical APA method that jointly models the intrinsic structures of an utterance while considering the relatedness among the pronunciation aspects. We also propose a correlation-aware regularizer to strengthen the connection between the estimated scores and the human annotations. Furthermore, novel pre-training strategies tailored for different linguistic levels are put forward so as to facilitate better model initialization. An extensive set of empirical experiments conducted on the speechocean762 benchmark dataset suggests the feasibility and effectiveness of our approach in relation to several competitive baselines. | [
"Yan, Bi-Cheng",
"Li, Jiun-Ting",
"Wang, Yi-Cheng",
"Wang, Hsin Wei",
"Lo, Tien-Hong",
"Hsu, Yung-Chang",
"Chao, Wei-Cheng",
"Chen, Berlin"
] | An Effective Pronunciation Assessment Approach Leveraging Hierarchical Transformers and Pre-training Strategies | acl-long.95 | Poster | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.95/ | [] | [] | [] | 0 |
||
https://aclanthology.org/2024.acl-long.96.bib | @inproceedings{li-wang-2024-detection,
title = "Detection-Correction Structure via General Language Model for Grammatical Error Correction",
author = "Li, Wei and
Wang, Houfeng",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.96",
pages = "1748--1763",
abstract = "Grammatical error correction (GEC) is a task dedicated to rectifying texts with minimal edits, which can be decoupled into two components: detection and correction. However, previous works have predominantly focused on direct correction, with no prior efforts to integrate both into a single model. Moreover, the exploration of the detection-correction paradigm by large language models (LLMs) remains underdeveloped. This paper introduces an integrated detection-correction structure, named DeCoGLM, based on the General Language Model (GLM). The detection phase employs a fault-tolerant detection template, while the correction phase leverages autoregressive mask infilling for localized error correction. Through the strategic organization of input tokens and modification of attention masks, we facilitate multi-task learning within a single model. Our model demonstrates competitive performance against the state-of-the-art models on English and Chinese GEC datasets. Further experiments present the effectiveness of the detection-correction structure in LLMs, suggesting a promising direction for GEC.",
}
| Grammatical error correction (GEC) is a task dedicated to rectifying texts with minimal edits, which can be decoupled into two components: detection and correction. However, previous works have predominantly focused on direct correction, with no prior efforts to integrate both into a single model. Moreover, the exploration of the detection-correction paradigm by large language models (LLMs) remains underdeveloped. This paper introduces an integrated detection-correction structure, named DeCoGLM, based on the General Language Model (GLM). The detection phase employs a fault-tolerant detection template, while the correction phase leverages autoregressive mask infilling for localized error correction. Through the strategic organization of input tokens and modification of attention masks, we facilitate multi-task learning within a single model. Our model demonstrates competitive performance against the state-of-the-art models on English and Chinese GEC datasets. Further experiments present the effectiveness of the detection-correction structure in LLMs, suggesting a promising direction for GEC. | [
"Li, Wei",
"Wang, Houfeng"
] | Detection-Correction Structure via General Language Model for Grammatical Error Correction | acl-long.96 | Poster | 2405.17804 | [
"https://github.com/GMago-LeWay/GECFramework"
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.96/ | [] | [] | [] | 0 |
|
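The detect-then-correct pattern that DeCoGLM integrates can be pictured as a two-stage pipeline: stage one flags suspicious tokens, stage two rewrites only the flagged positions. The toy below makes that pipeline concrete; `detect` and `correct` are hypothetical stand-ins (the real model performs both stages inside one GLM via fault-tolerant detection templates and autoregressive mask infilling).

```python
# Toy detect-then-correct GEC pipeline (illustrative, not DeCoGLM's code).
from typing import Callable

def gec(sentence: str,
        detect: Callable[[list[str]], list[bool]],
        correct: Callable[[list[str], int], str]) -> str:
    tokens = sentence.split()
    flags = detect(tokens)                  # stage 1: which tokens look wrong
    for i, bad in enumerate(flags):
        if bad:
            tokens[i] = correct(tokens, i)  # stage 2: local mask infilling
    return " ".join(tokens)

# Tiny rule-based stand-ins, just to make the pipeline runnable:
print(gec("He go to school",
          detect=lambda ts: [t == "go" for t in ts],
          correct=lambda ts, i: "goes"))  # -> "He goes to school"
```

Restricting stage two to flagged spans is what keeps edits minimal, the defining constraint of the GEC task.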
https://aclanthology.org/2024.acl-long.97.bib | @inproceedings{zhu-etal-2024-generative,
title = "Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer",
author = "Zhu, Yongxin and
Su, Dan and
He, Liqiang and
Xu, Linli and
Yu, Dong",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.97",
pages = "1764--1775",
abstract = "While recent advancements in speech language models have achieved significant progress, they face remarkable challenges in modeling the long acoustic sequences of neural audio codecs. In this paper, we introduce \textbf{G}enerative \textbf{P}re-trained \textbf{S}peech \textbf{T}ransformer (GPST), a hierarchical transformer designed for efficient speech language modeling. GPST quantizes audio waveforms into two distinct types of discrete speech representations and integrates them within a hierarchical transformer architecture, allowing for a unified one-stage generation process and enhancing Hi-Res audio generation capabilities. By training on large corpora of speeches in an end-to-end unsupervised manner, GPST can generate syntactically consistent speech with diverse speaker identities. Given a brief 3-second prompt, GPST can produce natural and coherent personalized speech, demonstrating in-context learning abilities. Moreover, our approach can be easily extended to spoken cross-lingual speech generation by incorporating multi-lingual semantic tokens and universal acoustic tokens. Experimental results indicate that GPST significantly outperforms the existing speech language models in terms of word error rate, speech quality, and speaker similarity. See \url{https://youngsheen.github.io/GPST/demo} for demo samples.",
}
| While recent advancements in speech language models have achieved significant progress, they face remarkable challenges in modeling the long acoustic sequences of neural audio codecs. In this paper, we introduce \textbf{G}enerative \textbf{P}re-trained \textbf{S}peech \textbf{T}ransformer (GPST), a hierarchical transformer designed for efficient speech language modeling. GPST quantizes audio waveforms into two distinct types of discrete speech representations and integrates them within a hierarchical transformer architecture, allowing for a unified one-stage generation process and enhancing Hi-Res audio generation capabilities. By training on large corpora of speeches in an end-to-end unsupervised manner, GPST can generate syntactically consistent speech with diverse speaker identities. Given a brief 3-second prompt, GPST can produce natural and coherent personalized speech, demonstrating in-context learning abilities. Moreover, our approach can be easily extended to spoken cross-lingual speech generation by incorporating multi-lingual semantic tokens and universal acoustic tokens. Experimental results indicate that GPST significantly outperforms the existing speech language models in terms of word error rate, speech quality, and speaker similarity. See \url{https://youngsheen.github.io/GPST/demo} for demo samples. | [
"Zhu, Yongxin",
"Su, Dan",
"He, Liqiang",
"Xu, Linli",
"Yu, Dong"
] | Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer | acl-long.97 | Poster | 2406.00976 | [
""
] | https://huggingface.co/papers/2406.00976 | 1 | 0 | 0 | 5 | https://aclanthology.org/2024.acl-long.97/ | [] | [] | [] | 1 |
https://aclanthology.org/2024.acl-long.98.bib | @inproceedings{zhang-etal-2024-selene,
title = "Selene: Pioneering Automated Proof in Software Verification",
author = "Zhang, Lichen and
Lu, Shuai and
Duan, Nan",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.98",
pages = "1776--1789",
abstract = "Ensuring correctness is a pivotal aspect of software engineering. Among the various strategies available, software verification offers a definitive assurance of correctness. Nevertheless, writing verification proofs is resource-intensive and manpower-consuming, and there is a great need to automate this process. We introduce Selene in this paper, which is the first project-level automated proof benchmark constructed based on the real-world industrial-level operating system microkernel, seL4. Selene provides a comprehensive framework for end-to-end proof generation and a lightweight verification environment. Our experimental results with advanced large language models (LLMs), such as GPT-3.5-turbo and GPT-4, highlight the capabilities of LLMs in the domain of automated proof generation. Additionally, our further proposed augmentations indicate that the challenges presented by Selene can be mitigated in future research endeavors.",
}
| Ensuring correctness is a pivotal aspect of software engineering. Among the various strategies available, software verification offers a definitive assurance of correctness. Nevertheless, writing verification proofs is resource-intensive and manpower-consuming, and there is a great need to automate this process. We introduce Selene in this paper, which is the first project-level automated proof benchmark constructed based on the real-world industrial-level operating system microkernel, seL4. Selene provides a comprehensive framework for end-to-end proof generation and a lightweight verification environment. Our experimental results with advanced large language models (LLMs), such as GPT-3.5-turbo and GPT-4, highlight the capabilities of LLMs in the domain of automated proof generation. Additionally, our further proposed augmentations indicate that the challenges presented by Selene can be mitigated in future research endeavors. | [
"Zhang, Lichen",
"Lu, Shuai",
"Duan, Nan"
] | Selene: Pioneering Automated Proof in Software Verification | acl-long.98 | Poster | 2401.07663 | [
""
] | -1 | -1 | -1 | -1 | https://aclanthology.org/2024.acl-long.98/ | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.acl-long.99.bib | @inproceedings{li-etal-2024-dissecting,
title = "Dissecting Human and {LLM} Preferences",
author = "Li, Junlong and
Zhou, Fan and
Sun, Shichao and
Zhang, Yikai and
Zhao, Hai and
Liu, Pengfei",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.99",
pages = "1790--1811",
abstract = "As a relative quality comparison of model responses, human and Large Language Model (LLM) preferences serve as common alignment goals in model fine-tuning and criteria in evaluation. Yet, these preferences merely reflect broad tendencies, resulting in less explainable and controllable models with potential safety risks. In this work, we dissect the preferences of human and 32 different LLMs to understand their quantitative composition, using annotations from real-world user-model conversations for a fine-grained, scenario-wise analysis. We find that humans are less sensitive to errors, favor responses that support their stances, and show clear dislike when models admit their limits. On the contrary, advanced LLMs like GPT-4-Turbo emphasize correctness, clarity, and harmlessness more. Additionally, LLMs of similar sizes tend to exhibit similar preferences, regardless of their training methods, and fine-tuning for alignment does not significantly alter the preferences of pretrained-only LLMs. Finally, we show that preference-based evaluation can be intentionally manipulated. In both training-free and training-based settings, aligning a model with the preferences of judges boosts scores, while injecting the least preferred properties lowers them. This results in notable score shifts: up to 0.59 on MT-Bench (1-10 scale) and 31.94 on AlpacaEval 2.0 (0-100 scale), highlighting the significant impact of this strategic adaptation. We have made all resources of this project publicly available.",
}
| As a relative quality comparison of model responses, human and Large Language Model (LLM) preferences serve as common alignment goals in model fine-tuning and criteria in evaluation. Yet, these preferences merely reflect broad tendencies, resulting in less explainable and controllable models with potential safety risks. In this work, we dissect the preferences of human and 32 different LLMs to understand their quantitative composition, using annotations from real-world user-model conversations for a fine-grained, scenario-wise analysis. We find that humans are less sensitive to errors, favor responses that support their stances, and show clear dislike when models admit their limits. On the contrary, advanced LLMs like GPT-4-Turbo emphasize correctness, clarity, and harmlessness more. Additionally, LLMs of similar sizes tend to exhibit similar preferences, regardless of their training methods, and fine-tuning for alignment does not significantly alter the preferences of pretrained-only LLMs. Finally, we show that preference-based evaluation can be intentionally manipulated. In both training-free and training-based settings, aligning a model with the preferences of judges boosts scores, while injecting the least preferred properties lowers them. This results in notable score shifts: up to 0.59 on MT-Bench (1-10 scale) and 31.94 on AlpacaEval 2.0 (0-100 scale), highlighting the significant impact of this strategic adaptation. We have made all resources of this project publicly available. | [
"Li, Junlong",
"Zhou, Fan",
"Sun, Shichao",
"Zhang, Yikai",
"Zhao, Hai",
"Liu, Pengfei"
] | Dissecting Human and LLM Preferences | acl-long.99 | Poster | 2402.11296 | [
"https://github.com/gair-nlp/preference-dissection"
] | https://huggingface.co/papers/2402.11296 | 3 | 3 | 0 | 6 | https://aclanthology.org/2024.acl-long.99/ | [] | [
"GAIR/preference-dissection"
] | [
"GAIR/Preference-Dissection-Visualization"
] | 1 |
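The record above links the paper's released artifacts, including the GAIR/preference-dissection dataset. As a minimal sketch of how one might browse the scenario-wise preference annotations the abstract describes — assuming the standard Hugging Face `datasets` API and making no assumptions about column names, which are printed before use:

```python
# Minimal sketch: browse the preference annotations linked in this record.
# Assumes the Hugging Face `datasets` library is installed; the dataset's
# splits and column names are discovered at runtime rather than assumed.
from datasets import load_dataset

ds = load_dataset("GAIR/preference-dissection")
print(ds)  # shows the available splits and their features

split = next(iter(ds.values()))  # take the first available split
print(split[0])  # one annotated real-world user-model conversation
```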
https://aclanthology.org/2024.acl-long.100.bib | @inproceedings{sun-etal-2024-unicoder,
title = "{U}ni{C}oder: Scaling Code Large Language Model via Universal Code",
author = "Sun, Tao and
Chai, Linzheng and
Yang, Jian and
Yin, Yuwei and
Guo, Hongcheng and
Liu, Jiaheng and
Wang, Bing and
Yang, Liqun and
Li, Zhoujun",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.100",
pages = "1812--1824",
abstract = "Intermediate reasoning or acting steps have successfully improved large language models (LLMs) for handling various downstream natural language processing (NLP) tasks.When applying LLMs for code generation, recent works mainly focus on directing the models to articulate intermediate natural-language reasoning steps, as in chain-of-thought (CoT) prompting, and then output code with the natural language or other structured intermediate steps. However, such output is not suitable for code translation or generation tasks since the standard CoT has different logical structures and forms of expression with the code. In this work, we introduce the universal code (UniCode) as the intermediate representation. It is a description of algorithm steps using a mix of conventions of programming languages, such as assignment operator, conditional operator, and loop. Hence, we collect an instruction dataset UniCoder-Instruct to train our model UniCoder on multi-task learning objectives. UniCoder-Instruct comprises natural-language questions, code solutions, and the corresponding universal code. The alignment between the intermediate universal code representation and the final code solution significantly improves the quality of the generated code. The experimental results demonstrate that UniCoder with the universal code significantly outperforms the previous prompting methods by a large margin, showcasing the effectiveness of the structural clues in pseudo-code.",
}
| Intermediate reasoning or acting steps have successfully improved large language models (LLMs) for handling various downstream natural language processing (NLP) tasks. When applying LLMs to code generation, recent works mainly focus on directing the models to articulate intermediate natural-language reasoning steps, as in chain-of-thought (CoT) prompting, and then output code alongside the natural language or other structured intermediate steps. However, such output is not suitable for code translation or generation tasks since the standard CoT has logical structures and forms of expression that differ from code. In this work, we introduce the universal code (UniCode) as the intermediate representation. It is a description of algorithm steps using a mix of conventions from programming languages, such as assignment operators, conditional operators, and loops. We then collect an instruction dataset, UniCoder-Instruct, to train our model UniCoder on multi-task learning objectives. UniCoder-Instruct comprises natural-language questions, code solutions, and the corresponding universal code. The alignment between the intermediate universal code representation and the final code solution significantly improves the quality of the generated code. The experimental results demonstrate that UniCoder with the universal code significantly outperforms previous prompting methods by a large margin, showcasing the effectiveness of the structural clues in pseudo-code. | [
"Sun, Tao",
"Chai, Linzheng",
"Yang, Jian",
"Yin, Yuwei",
"Guo, Hongcheng",
"Liu, Jiaheng",
"Wang, Bing",
"Yang, Liqun",
"Li, Zhoujun"
] | UniCoder: Scaling Code Large Language Model via Universal Code | acl-long.100 | Poster | 2406.16441 | [
""
] | https://huggingface.co/papers/2406.16441 | 1 | 2 | 0 | 9 | https://aclanthology.org/2024.acl-long.100/ | [] | [] | [] | 1 |
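To make the abstract's central idea concrete: the universal code is a pseudo-code plan written with programming-language conventions (assignment, conditionals, loops), emitted before the final program so the two can be aligned. The sketch below is invented for exposition — the universal-code syntax shown is not the paper's exact specification:

```python
# Illustration only: a made-up "universal code" plan in the spirit the
# abstract describes (assignment, conditional, and loop conventions),
# followed by the concrete code solution it would align with.
universal_code = """
FUNCTION max_element(nums):
    best = nums[0]
    FOR x IN nums:
        IF x > best:
            best = x
    RETURN best
"""

def max_element(nums):
    # Final code solution mirroring the universal-code plan step by step.
    best = nums[0]
    for x in nums:
        if x > best:
            best = x
    return best

assert max_element([3, 1, 4, 1, 5, 9, 2]) == 9
```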